00:00:00.001 Started by upstream project "autotest-nightly" build number 3629 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3011 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.115 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.116 The recommended git tool is: git 00:00:00.116 using credential 00000000-0000-0000-0000-000000000002 00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.170 Fetching changes from the remote Git repository 00:00:00.172 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.244 > git --version # 'git version 2.39.2' 00:00:00.244 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.429 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.442 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.454 Checking out Revision 6201031def5bfb7f90a861bc162998684798607e (FETCH_HEAD) 00:00:04.454 > git config core.sparsecheckout # timeout=10 00:00:04.465 > git read-tree -mu HEAD # timeout=10 00:00:04.480 > git checkout -f 6201031def5bfb7f90a861bc162998684798607e # timeout=5 00:00:04.499 Commit message: "scripts/kid: Add issue 3354" 00:00:04.499 > git rev-list --no-walk 6201031def5bfb7f90a861bc162998684798607e # timeout=10 00:00:04.612 [Pipeline] Start of Pipeline 00:00:04.626 [Pipeline] library 00:00:04.628 Loading library shm_lib@master 00:00:04.628 Library shm_lib@master is cached. Copying from home. 00:00:04.640 [Pipeline] node 00:00:04.649 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:04.651 [Pipeline] { 00:00:04.661 [Pipeline] catchError 00:00:04.662 [Pipeline] { 00:00:04.675 [Pipeline] wrap 00:00:04.686 [Pipeline] { 00:00:04.694 [Pipeline] stage 00:00:04.696 [Pipeline] { (Prologue) 00:00:04.715 [Pipeline] echo 00:00:04.716 Node: VM-host-SM9 00:00:04.721 [Pipeline] cleanWs 00:00:04.728 [WS-CLEANUP] Deleting project workspace... 00:00:04.728 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.734 [WS-CLEANUP] done 00:00:04.886 [Pipeline] setCustomBuildProperty 00:00:04.941 [Pipeline] nodesByLabel 00:00:04.942 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.951 [Pipeline] httpRequest 00:00:04.954 HttpMethod: GET 00:00:04.955 URL: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:04.962 Sending request to url: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:04.964 Response Code: HTTP/1.1 200 OK 00:00:04.964 Success: Status code 200 is in the accepted range: 200,404 00:00:04.964 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:06.021 [Pipeline] sh 00:00:06.297 + tar --no-same-owner -xf jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:06.314 [Pipeline] httpRequest 00:00:06.317 HttpMethod: GET 00:00:06.318 URL: http://10.211.164.96/packages/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:06.320 Sending request to url: http://10.211.164.96/packages/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:00:06.349 Response Code: HTTP/1.1 200 OK 00:00:06.350 Success: Status code 200 is in the accepted range: 200,404 00:00:06.350 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:01:15.095 [Pipeline] sh 00:01:15.375 + tar --no-same-owner -xf spdk_06472fb6d0c234046253a9989fef790e0cbb219e.tar.gz 00:01:18.672 [Pipeline] sh 00:01:18.949 + git -C spdk log --oneline -n5 00:01:18.949 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:01:18.949 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:01:18.949 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:01:18.949 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:01:18.949 c11e5c113 bdev: introduce bdev_nvme_cdw12 and cdw13, and add them to ext_opts 00:01:18.965 [Pipeline] writeFile 00:01:18.981 [Pipeline] sh 00:01:19.258 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:19.269 [Pipeline] sh 00:01:19.546 + cat autorun-spdk.conf 00:01:19.546 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.546 SPDK_TEST_NVMF=1 00:01:19.546 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.546 SPDK_TEST_VFIOUSER=1 00:01:19.546 SPDK_TEST_USDT=1 00:01:19.546 SPDK_RUN_UBSAN=1 00:01:19.546 SPDK_TEST_NVMF_MDNS=1 00:01:19.546 NET_TYPE=virt 00:01:19.546 SPDK_JSONRPC_GO_CLIENT=1 00:01:19.546 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.554 RUN_NIGHTLY=1 00:01:19.557 [Pipeline] } 00:01:19.573 [Pipeline] // stage 00:01:19.589 [Pipeline] stage 00:01:19.592 [Pipeline] { (Run VM) 00:01:19.606 [Pipeline] sh 00:01:19.926 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:19.926 + echo 'Start stage prepare_nvme.sh' 00:01:19.926 Start stage prepare_nvme.sh 00:01:19.926 + [[ -n 4 ]] 00:01:19.926 + disk_prefix=ex4 00:01:19.926 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:01:19.927 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:01:19.927 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:01:19.927 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.927 ++ SPDK_TEST_NVMF=1 00:01:19.927 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.927 ++ SPDK_TEST_VFIOUSER=1 00:01:19.927 ++ SPDK_TEST_USDT=1 00:01:19.927 ++ SPDK_RUN_UBSAN=1 00:01:19.927 ++ SPDK_TEST_NVMF_MDNS=1 00:01:19.927 ++ NET_TYPE=virt 00:01:19.927 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:19.927 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.927 ++ 
RUN_NIGHTLY=1 00:01:19.927 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:19.927 + nvme_files=() 00:01:19.927 + declare -A nvme_files 00:01:19.927 + backend_dir=/var/lib/libvirt/images/backends 00:01:19.927 + nvme_files['nvme.img']=5G 00:01:19.927 + nvme_files['nvme-cmb.img']=5G 00:01:19.927 + nvme_files['nvme-multi0.img']=4G 00:01:19.927 + nvme_files['nvme-multi1.img']=4G 00:01:19.927 + nvme_files['nvme-multi2.img']=4G 00:01:19.927 + nvme_files['nvme-openstack.img']=8G 00:01:19.927 + nvme_files['nvme-zns.img']=5G 00:01:19.927 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:19.927 + (( SPDK_TEST_FTL == 1 )) 00:01:19.927 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:19.927 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:19.927 + for nvme in "${!nvme_files[@]}" 00:01:19.927 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:19.927 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.927 + for nvme in "${!nvme_files[@]}" 00:01:19.927 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:19.927 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.927 + for nvme in "${!nvme_files[@]}" 00:01:19.927 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:19.927 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:19.927 + for nvme in "${!nvme_files[@]}" 00:01:19.927 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:19.927 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.927 + for nvme in "${!nvme_files[@]}" 00:01:19.927 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:19.927 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.927 + for nvme in "${!nvme_files[@]}" 00:01:19.927 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:19.927 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.927 + for nvme in "${!nvme_files[@]}" 00:01:19.927 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:20.185 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.185 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:20.185 + echo 'End stage prepare_nvme.sh' 00:01:20.185 End stage prepare_nvme.sh 00:01:20.196 [Pipeline] sh 00:01:20.473 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:20.473 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:01:20.473 00:01:20.473 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:01:20.473 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:01:20.473 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:20.473 HELP=0 00:01:20.473 DRY_RUN=0 00:01:20.473 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:20.473 NVME_DISKS_TYPE=nvme,nvme, 00:01:20.473 NVME_AUTO_CREATE=0 00:01:20.473 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:20.473 NVME_CMB=,, 00:01:20.473 NVME_PMR=,, 00:01:20.473 NVME_ZNS=,, 00:01:20.473 NVME_MS=,, 00:01:20.473 NVME_FDP=,, 00:01:20.473 SPDK_VAGRANT_DISTRO=fedora38 00:01:20.473 SPDK_VAGRANT_VMCPU=10 00:01:20.473 SPDK_VAGRANT_VMRAM=12288 00:01:20.473 SPDK_VAGRANT_PROVIDER=libvirt 00:01:20.473 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:20.473 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:20.473 SPDK_OPENSTACK_NETWORK=0 00:01:20.474 VAGRANT_PACKAGE_BOX=0 00:01:20.474 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:20.474 FORCE_DISTRO=true 00:01:20.474 VAGRANT_BOX_VERSION= 00:01:20.474 EXTRA_VAGRANTFILES= 00:01:20.474 NIC_MODEL=e1000 00:01:20.474 00:01:20.474 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:01:20.474 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:23.757 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.014 ==> default: Creating image (snapshot of base box volume). 00:01:24.272 ==> default: Creating domain with the following settings... 
00:01:24.272 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1714064693_f206e964d6d190a5143a 00:01:24.272 ==> default: -- Domain type: kvm 00:01:24.272 ==> default: -- Cpus: 10 00:01:24.272 ==> default: -- Feature: acpi 00:01:24.272 ==> default: -- Feature: apic 00:01:24.272 ==> default: -- Feature: pae 00:01:24.272 ==> default: -- Memory: 12288M 00:01:24.272 ==> default: -- Memory Backing: hugepages: 00:01:24.272 ==> default: -- Management MAC: 00:01:24.272 ==> default: -- Loader: 00:01:24.272 ==> default: -- Nvram: 00:01:24.272 ==> default: -- Base box: spdk/fedora38 00:01:24.272 ==> default: -- Storage pool: default 00:01:24.272 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1714064693_f206e964d6d190a5143a.img (20G) 00:01:24.272 ==> default: -- Volume Cache: default 00:01:24.272 ==> default: -- Kernel: 00:01:24.272 ==> default: -- Initrd: 00:01:24.272 ==> default: -- Graphics Type: vnc 00:01:24.272 ==> default: -- Graphics Port: -1 00:01:24.272 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.272 ==> default: -- Graphics Password: Not defined 00:01:24.272 ==> default: -- Video Type: cirrus 00:01:24.272 ==> default: -- Video VRAM: 9216 00:01:24.272 ==> default: -- Sound Type: 00:01:24.272 ==> default: -- Keymap: en-us 00:01:24.272 ==> default: -- TPM Path: 00:01:24.272 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.272 ==> default: -- Command line args: 00:01:24.272 ==> default: -> value=-device, 00:01:24.272 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:24.272 ==> default: -> value=-drive, 00:01:24.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:24.272 ==> default: -> value=-device, 00:01:24.272 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.272 ==> default: -> value=-device, 00:01:24.272 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:24.272 ==> default: -> value=-drive, 00:01:24.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:24.272 ==> default: -> value=-device, 00:01:24.272 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.272 ==> default: -> value=-drive, 00:01:24.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:24.272 ==> default: -> value=-device, 00:01:24.272 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.272 ==> default: -> value=-drive, 00:01:24.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:24.272 ==> default: -> value=-device, 00:01:24.272 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.272 ==> default: Creating shared folders metadata... 00:01:24.272 ==> default: Starting domain. 00:01:25.652 ==> default: Waiting for domain to get an IP address... 00:01:43.763 ==> default: Waiting for SSH to become available... 00:01:44.699 ==> default: Configuring and enabling network interfaces... 
00:01:48.886 default: SSH address: 192.168.121.130:22 00:01:48.886 default: SSH username: vagrant 00:01:48.886 default: SSH auth method: private key 00:01:51.424 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:57.988 ==> default: Mounting SSHFS shared folder... 00:01:59.891 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:59.891 ==> default: Checking Mount.. 00:02:01.269 ==> default: Folder Successfully Mounted! 00:02:01.269 ==> default: Running provisioner: file... 00:02:01.837 default: ~/.gitconfig => .gitconfig 00:02:02.405 00:02:02.405 SUCCESS! 00:02:02.405 00:02:02.405 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:02:02.405 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:02.405 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:02:02.405 00:02:02.415 [Pipeline] } 00:02:02.435 [Pipeline] // stage 00:02:02.443 [Pipeline] dir 00:02:02.443 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:02:02.445 [Pipeline] { 00:02:02.458 [Pipeline] catchError 00:02:02.460 [Pipeline] { 00:02:02.476 [Pipeline] sh 00:02:02.790 + vagrant ssh-config --host vagrant 00:02:02.790 + sed -ne /^Host/,$p 00:02:02.790 + tee ssh_conf 00:02:06.077 Host vagrant 00:02:06.077 HostName 192.168.121.130 00:02:06.077 User vagrant 00:02:06.077 Port 22 00:02:06.077 UserKnownHostsFile /dev/null 00:02:06.077 StrictHostKeyChecking no 00:02:06.077 PasswordAuthentication no 00:02:06.077 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:02:06.077 IdentitiesOnly yes 00:02:06.077 LogLevel FATAL 00:02:06.077 ForwardAgent yes 00:02:06.077 ForwardX11 yes 00:02:06.077 00:02:06.090 [Pipeline] withEnv 00:02:06.092 [Pipeline] { 00:02:06.107 [Pipeline] sh 00:02:06.383 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:06.383 source /etc/os-release 00:02:06.383 [[ -e /image.version ]] && img=$(< /image.version) 00:02:06.383 # Minimal, systemd-like check. 00:02:06.383 if [[ -e /.dockerenv ]]; then 00:02:06.383 # Clear garbage from the node's name: 00:02:06.383 # agt-er_autotest_547-896 -> autotest_547-896 00:02:06.383 # $HOSTNAME is the actual container id 00:02:06.383 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:06.384 if mountpoint -q /etc/hostname; then 00:02:06.384 # We can assume this is a mount from a host where container is running, 00:02:06.384 # so fetch its hostname to easily identify the target swarm worker. 
00:02:06.384 container="$(< /etc/hostname) ($agent)" 00:02:06.384 else 00:02:06.384 # Fallback 00:02:06.384 container=$agent 00:02:06.384 fi 00:02:06.384 fi 00:02:06.384 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:06.384 00:02:06.652 [Pipeline] } 00:02:06.672 [Pipeline] // withEnv 00:02:06.679 [Pipeline] setCustomBuildProperty 00:02:06.692 [Pipeline] stage 00:02:06.694 [Pipeline] { (Tests) 00:02:06.712 [Pipeline] sh 00:02:06.990 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:07.260 [Pipeline] timeout 00:02:07.260 Timeout set to expire in 40 min 00:02:07.262 [Pipeline] { 00:02:07.277 [Pipeline] sh 00:02:07.555 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:08.122 HEAD is now at 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:02:08.134 [Pipeline] sh 00:02:08.424 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:08.707 [Pipeline] sh 00:02:08.985 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:09.257 [Pipeline] sh 00:02:09.535 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:02:09.794 ++ readlink -f spdk_repo 00:02:09.794 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:09.794 + [[ -n /home/vagrant/spdk_repo ]] 00:02:09.794 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:09.794 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:09.794 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:09.794 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:09.794 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:09.794 + cd /home/vagrant/spdk_repo 00:02:09.794 + source /etc/os-release 00:02:09.794 ++ NAME='Fedora Linux' 00:02:09.794 ++ VERSION='38 (Cloud Edition)' 00:02:09.794 ++ ID=fedora 00:02:09.794 ++ VERSION_ID=38 00:02:09.794 ++ VERSION_CODENAME= 00:02:09.794 ++ PLATFORM_ID=platform:f38 00:02:09.794 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:09.794 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.794 ++ LOGO=fedora-logo-icon 00:02:09.794 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:09.794 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.794 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:09.794 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.794 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.794 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.794 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:09.794 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.794 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:09.794 ++ SUPPORT_END=2024-05-14 00:02:09.794 ++ VARIANT='Cloud Edition' 00:02:09.794 ++ VARIANT_ID=cloud 00:02:09.794 + uname -a 00:02:09.794 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:09.794 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:10.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:10.053 Hugepages 00:02:10.053 node hugesize free / total 00:02:10.053 node0 1048576kB 0 / 0 00:02:10.053 node0 2048kB 0 / 0 00:02:10.053 00:02:10.053 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:10.312 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:10.312 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:10.312 NVMe 0000:00:11.0 1b36 0010 
unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:10.312 + rm -f /tmp/spdk-ld-path 00:02:10.312 + source autorun-spdk.conf 00:02:10.312 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.312 ++ SPDK_TEST_NVMF=1 00:02:10.312 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.312 ++ SPDK_TEST_VFIOUSER=1 00:02:10.312 ++ SPDK_TEST_USDT=1 00:02:10.312 ++ SPDK_RUN_UBSAN=1 00:02:10.312 ++ SPDK_TEST_NVMF_MDNS=1 00:02:10.312 ++ NET_TYPE=virt 00:02:10.312 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:10.312 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.312 ++ RUN_NIGHTLY=1 00:02:10.312 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.312 + [[ -n '' ]] 00:02:10.312 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:10.312 + for M in /var/spdk/build-*-manifest.txt 00:02:10.312 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.312 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.312 + for M in /var/spdk/build-*-manifest.txt 00:02:10.312 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.312 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.312 ++ uname 00:02:10.312 + [[ Linux == \L\i\n\u\x ]] 00:02:10.312 + sudo dmesg -T 00:02:10.312 + sudo dmesg --clear 00:02:10.312 + dmesg_pid=5155 00:02:10.312 + [[ Fedora Linux == FreeBSD ]] 00:02:10.312 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.312 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.312 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.312 + sudo dmesg -Tw 00:02:10.312 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.312 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.312 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.312 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.312 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.312 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.312 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.312 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.312 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.312 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.312 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.312 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.312 Test configuration: 00:02:10.312 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.312 SPDK_TEST_NVMF=1 00:02:10.312 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.312 SPDK_TEST_VFIOUSER=1 00:02:10.312 SPDK_TEST_USDT=1 00:02:10.312 SPDK_RUN_UBSAN=1 00:02:10.312 SPDK_TEST_NVMF_MDNS=1 00:02:10.312 NET_TYPE=virt 00:02:10.312 SPDK_JSONRPC_GO_CLIENT=1 00:02:10.312 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.312 RUN_NIGHTLY=1 17:05:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.312 17:05:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.312 17:05:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.312 17:05:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.312 17:05:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.312 17:05:40 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.312 17:05:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.312 17:05:40 -- paths/export.sh@5 -- $ export PATH 00:02:10.312 17:05:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.312 17:05:40 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.571 17:05:40 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:10.571 17:05:40 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714064740.XXXXXX 00:02:10.572 17:05:40 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714064740.oKP5Dp 00:02:10.572 17:05:40 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:10.572 17:05:40 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:02:10.572 17:05:40 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:10.572 17:05:40 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.572 17:05:40 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.572 17:05:40 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:10.572 17:05:40 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:02:10.572 17:05:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.572 17:05:40 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:02:10.572 17:05:40 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:02:10.572 17:05:40 -- pm/common@17 -- $ local monitor 00:02:10.572 17:05:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.572 17:05:40 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5189 00:02:10.572 17:05:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.572 17:05:40 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5191 00:02:10.572 17:05:40 -- pm/common@21 -- $ date +%s 00:02:10.572 17:05:40 -- pm/common@26 -- $ sleep 1 00:02:10.572 17:05:40 -- pm/common@21 -- $ date +%s 00:02:10.572 17:05:40 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714064740 00:02:10.572 17:05:40 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714064740 00:02:10.572 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714064740_collect-vmstat.pm.log 00:02:10.572 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714064740_collect-cpu-load.pm.log 00:02:11.507 17:05:41 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:02:11.507 17:05:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.508 17:05:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.508 17:05:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:11.508 17:05:41 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.508 Thu Apr 25 05:05:41 PM UTC 2024 00:02:11.508 17:05:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.508 v24.05-pre-448-g06472fb6d 00:02:11.508 17:05:41 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.508 17:05:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.508 17:05:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.508 17:05:41 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:11.508 17:05:41 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:11.508 17:05:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.508 ************************************ 00:02:11.508 START TEST ubsan 00:02:11.508 ************************************ 00:02:11.508 using ubsan 00:02:11.508 17:05:41 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:02:11.508 00:02:11.508 real 0m0.000s 00:02:11.508 user 0m0.000s 00:02:11.508 sys 0m0.000s 00:02:11.508 17:05:41 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:11.508 17:05:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.508 ************************************ 00:02:11.508 END TEST ubsan 00:02:11.508 ************************************ 00:02:11.508 17:05:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.508 17:05:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.508 17:05:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.508 17:05:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.508 17:05:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.508 17:05:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.508 17:05:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.508 17:05:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.508 17:05:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:02:11.767 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:11.767 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.334 Using 'verbs' RDMA provider 00:02:27.777 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:39.975 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:39.975 go version go1.21.1 linux/amd64 00:02:39.975 Creating mk/config.mk...done. 
00:02:39.975 Creating mk/cc.flags.mk...done. 00:02:39.975 Type 'make' to build. 00:02:39.975 17:06:08 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:39.975 17:06:08 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:39.975 17:06:08 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:39.975 17:06:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.975 ************************************ 00:02:39.975 START TEST make 00:02:39.975 ************************************ 00:02:39.975 17:06:08 -- common/autotest_common.sh@1111 -- $ make -j10 00:02:39.975 make[1]: Nothing to be done for 'all'. 00:02:40.592 The Meson build system 00:02:40.592 Version: 1.3.1 00:02:40.592 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:40.592 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:40.592 Build type: native build 00:02:40.592 Project name: libvfio-user 00:02:40.592 Project version: 0.0.1 00:02:40.592 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:40.592 C linker for the host machine: cc ld.bfd 2.39-16 00:02:40.592 Host machine cpu family: x86_64 00:02:40.592 Host machine cpu: x86_64 00:02:40.592 Run-time dependency threads found: YES 00:02:40.592 Library dl found: YES 00:02:40.592 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:40.592 Run-time dependency json-c found: YES 0.17 00:02:40.592 Run-time dependency cmocka found: YES 1.1.7 00:02:40.592 Program pytest-3 found: NO 00:02:40.592 Program flake8 found: NO 00:02:40.592 Program misspell-fixer found: NO 00:02:40.592 Program restructuredtext-lint found: NO 00:02:40.592 Program valgrind found: YES (/usr/bin/valgrind) 00:02:40.592 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:40.592 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:40.592 Compiler for C supports arguments -Wwrite-strings: YES 00:02:40.592 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:40.592 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:40.592 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:40.592 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:40.592 Build targets in project: 8 00:02:40.592 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:40.592 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:40.592 00:02:40.592 libvfio-user 0.0.1 00:02:40.592 00:02:40.592 User defined options 00:02:40.592 buildtype : debug 00:02:40.592 default_library: shared 00:02:40.592 libdir : /usr/local/lib 00:02:40.592 00:02:40.592 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.158 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:41.158 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:41.158 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:41.158 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:41.417 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:41.417 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:41.417 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:41.417 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:41.417 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:41.417 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:41.417 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:41.417 [11/37] Compiling C object samples/null.p/null.c.o 00:02:41.417 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:41.417 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:41.417 [14/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:41.417 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:41.417 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:41.676 [17/37] Compiling C object samples/client.p/client.c.o 00:02:41.676 [18/37] Compiling C object samples/server.p/server.c.o 00:02:41.676 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:41.676 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:41.676 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:41.676 [22/37] Linking target samples/client 00:02:41.676 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:41.676 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:41.676 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:41.676 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:41.676 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:41.676 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:41.676 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:41.934 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:41.934 [31/37] Linking target test/unit_tests 00:02:41.934 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:41.934 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:41.934 [34/37] Linking target samples/gpio-pci-idio-16 00:02:41.934 [35/37] Linking target samples/server 00:02:41.934 [36/37] Linking target samples/null 00:02:41.934 [37/37] Linking target samples/lspci 00:02:41.934 INFO: autodetecting backend as ninja 00:02:41.934 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:41.934 
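For reference, the libvfio-user configuration summarized above (buildtype debug, shared default_library, libdir /usr/local/lib, source under spdk/libvfio-user, build dir build/libvfio-user/build-debug) could be reproduced outside the CI flow with something like the sketch below. This is only an illustration assembled from the paths and options recorded in the log, not the exact commands SPDK's build scripts issue.

# Illustrative only -- reconstructed from the options and paths recorded above:
meson setup /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug \
    /home/vagrant/spdk_repo/spdk/libvfio-user \
    -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug

The DESTDIR-based "meson install" command that follows in the log then stages the result under build/libvfio-user.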
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:42.500 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:42.500 ninja: no work to do. 00:02:52.494 The Meson build system 00:02:52.494 Version: 1.3.1 00:02:52.494 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:52.494 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:52.494 Build type: native build 00:02:52.494 Program cat found: YES (/usr/bin/cat) 00:02:52.494 Project name: DPDK 00:02:52.494 Project version: 23.11.0 00:02:52.494 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:52.494 C linker for the host machine: cc ld.bfd 2.39-16 00:02:52.494 Host machine cpu family: x86_64 00:02:52.494 Host machine cpu: x86_64 00:02:52.494 Message: ## Building in Developer Mode ## 00:02:52.494 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:52.494 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:52.494 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:52.494 Program python3 found: YES (/usr/bin/python3) 00:02:52.494 Program cat found: YES (/usr/bin/cat) 00:02:52.494 Compiler for C supports arguments -march=native: YES 00:02:52.494 Checking for size of "void *" : 8 00:02:52.494 Checking for size of "void *" : 8 (cached) 00:02:52.494 Library m found: YES 00:02:52.494 Library numa found: YES 00:02:52.494 Has header "numaif.h" : YES 00:02:52.494 Library fdt found: NO 00:02:52.494 Library execinfo found: NO 00:02:52.494 Has header "execinfo.h" : YES 00:02:52.494 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:52.494 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:52.494 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:52.494 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:52.494 Run-time dependency openssl found: YES 3.0.9 00:02:52.494 Run-time dependency libpcap found: YES 1.10.4 00:02:52.494 Has header "pcap.h" with dependency libpcap: YES 00:02:52.494 Compiler for C supports arguments -Wcast-qual: YES 00:02:52.494 Compiler for C supports arguments -Wdeprecated: YES 00:02:52.494 Compiler for C supports arguments -Wformat: YES 00:02:52.494 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:52.494 Compiler for C supports arguments -Wformat-security: NO 00:02:52.494 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.494 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:52.494 Compiler for C supports arguments -Wnested-externs: YES 00:02:52.494 Compiler for C supports arguments -Wold-style-definition: YES 00:02:52.494 Compiler for C supports arguments -Wpointer-arith: YES 00:02:52.494 Compiler for C supports arguments -Wsign-compare: YES 00:02:52.494 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:52.494 Compiler for C supports arguments -Wundef: YES 00:02:52.494 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.494 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:52.494 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:52.494 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.494 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:52.494 Program objdump found: YES (/usr/bin/objdump) 00:02:52.494 
Compiler for C supports arguments -mavx512f: YES 00:02:52.494 Checking if "AVX512 checking" compiles: YES 00:02:52.494 Fetching value of define "__SSE4_2__" : 1 00:02:52.494 Fetching value of define "__AES__" : 1 00:02:52.494 Fetching value of define "__AVX__" : 1 00:02:52.494 Fetching value of define "__AVX2__" : 1 00:02:52.494 Fetching value of define "__AVX512BW__" : (undefined) 00:02:52.494 Fetching value of define "__AVX512CD__" : (undefined) 00:02:52.494 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:52.494 Fetching value of define "__AVX512F__" : (undefined) 00:02:52.495 Fetching value of define "__AVX512VL__" : (undefined) 00:02:52.495 Fetching value of define "__PCLMUL__" : 1 00:02:52.495 Fetching value of define "__RDRND__" : 1 00:02:52.495 Fetching value of define "__RDSEED__" : 1 00:02:52.495 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:52.495 Fetching value of define "__znver1__" : (undefined) 00:02:52.495 Fetching value of define "__znver2__" : (undefined) 00:02:52.495 Fetching value of define "__znver3__" : (undefined) 00:02:52.495 Fetching value of define "__znver4__" : (undefined) 00:02:52.495 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:52.495 Message: lib/log: Defining dependency "log" 00:02:52.495 Message: lib/kvargs: Defining dependency "kvargs" 00:02:52.495 Message: lib/telemetry: Defining dependency "telemetry" 00:02:52.495 Checking for function "getentropy" : NO 00:02:52.495 Message: lib/eal: Defining dependency "eal" 00:02:52.495 Message: lib/ring: Defining dependency "ring" 00:02:52.495 Message: lib/rcu: Defining dependency "rcu" 00:02:52.495 Message: lib/mempool: Defining dependency "mempool" 00:02:52.495 Message: lib/mbuf: Defining dependency "mbuf" 00:02:52.495 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:52.495 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:52.495 Compiler for C supports arguments -mpclmul: YES 00:02:52.495 Compiler for C supports arguments -maes: YES 00:02:52.495 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.495 Compiler for C supports arguments -mavx512bw: YES 00:02:52.495 Compiler for C supports arguments -mavx512dq: YES 00:02:52.495 Compiler for C supports arguments -mavx512vl: YES 00:02:52.495 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:52.495 Compiler for C supports arguments -mavx2: YES 00:02:52.495 Compiler for C supports arguments -mavx: YES 00:02:52.495 Message: lib/net: Defining dependency "net" 00:02:52.495 Message: lib/meter: Defining dependency "meter" 00:02:52.495 Message: lib/ethdev: Defining dependency "ethdev" 00:02:52.495 Message: lib/pci: Defining dependency "pci" 00:02:52.495 Message: lib/cmdline: Defining dependency "cmdline" 00:02:52.495 Message: lib/hash: Defining dependency "hash" 00:02:52.495 Message: lib/timer: Defining dependency "timer" 00:02:52.495 Message: lib/compressdev: Defining dependency "compressdev" 00:02:52.495 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:52.495 Message: lib/dmadev: Defining dependency "dmadev" 00:02:52.495 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:52.495 Message: lib/power: Defining dependency "power" 00:02:52.495 Message: lib/reorder: Defining dependency "reorder" 00:02:52.495 Message: lib/security: Defining dependency "security" 00:02:52.495 Has header "linux/userfaultfd.h" : YES 00:02:52.495 Has header "linux/vduse.h" : YES 00:02:52.495 Message: lib/vhost: Defining dependency "vhost" 00:02:52.495 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:52.495 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:52.495 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:52.495 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:52.495 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:52.495 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:52.495 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:52.495 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:52.495 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:52.495 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:52.495 Program doxygen found: YES (/usr/bin/doxygen) 00:02:52.495 Configuring doxy-api-html.conf using configuration 00:02:52.495 Configuring doxy-api-man.conf using configuration 00:02:52.495 Program mandb found: YES (/usr/bin/mandb) 00:02:52.495 Program sphinx-build found: NO 00:02:52.495 Configuring rte_build_config.h using configuration 00:02:52.495 Message: 00:02:52.495 ================= 00:02:52.495 Applications Enabled 00:02:52.495 ================= 00:02:52.495 00:02:52.495 apps: 00:02:52.495 00:02:52.495 00:02:52.495 Message: 00:02:52.495 ================= 00:02:52.495 Libraries Enabled 00:02:52.495 ================= 00:02:52.495 00:02:52.495 libs: 00:02:52.495 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:52.495 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:52.495 cryptodev, dmadev, power, reorder, security, vhost, 00:02:52.495 00:02:52.495 Message: 00:02:52.495 =============== 00:02:52.495 Drivers Enabled 00:02:52.495 =============== 00:02:52.495 00:02:52.495 common: 00:02:52.495 00:02:52.495 bus: 00:02:52.495 pci, vdev, 00:02:52.495 mempool: 00:02:52.495 ring, 00:02:52.495 dma: 00:02:52.495 00:02:52.495 net: 00:02:52.495 00:02:52.495 crypto: 00:02:52.495 00:02:52.495 compress: 00:02:52.495 00:02:52.495 vdpa: 00:02:52.495 00:02:52.495 00:02:52.495 Message: 00:02:52.495 ================= 00:02:52.495 Content Skipped 00:02:52.495 ================= 00:02:52.495 00:02:52.495 apps: 00:02:52.495 dumpcap: explicitly disabled via build config 00:02:52.495 graph: explicitly disabled via build config 00:02:52.495 pdump: explicitly disabled via build config 00:02:52.495 proc-info: explicitly disabled via build config 00:02:52.495 test-acl: explicitly disabled via build config 00:02:52.495 test-bbdev: explicitly disabled via build config 00:02:52.495 test-cmdline: explicitly disabled via build config 00:02:52.495 test-compress-perf: explicitly disabled via build config 00:02:52.495 test-crypto-perf: explicitly disabled via build config 00:02:52.495 test-dma-perf: explicitly disabled via build config 00:02:52.495 test-eventdev: explicitly disabled via build config 00:02:52.495 test-fib: explicitly disabled via build config 00:02:52.495 test-flow-perf: explicitly disabled via build config 00:02:52.495 test-gpudev: explicitly disabled via build config 00:02:52.495 test-mldev: explicitly disabled via build config 00:02:52.495 test-pipeline: explicitly disabled via build config 00:02:52.495 test-pmd: explicitly disabled via build config 00:02:52.495 test-regex: explicitly disabled via build config 00:02:52.495 test-sad: explicitly disabled via build config 00:02:52.495 test-security-perf: explicitly disabled via build config 00:02:52.495 00:02:52.495 libs: 00:02:52.495 metrics: explicitly disabled 
via build config 00:02:52.495 acl: explicitly disabled via build config 00:02:52.495 bbdev: explicitly disabled via build config 00:02:52.495 bitratestats: explicitly disabled via build config 00:02:52.495 bpf: explicitly disabled via build config 00:02:52.495 cfgfile: explicitly disabled via build config 00:02:52.495 distributor: explicitly disabled via build config 00:02:52.495 efd: explicitly disabled via build config 00:02:52.495 eventdev: explicitly disabled via build config 00:02:52.495 dispatcher: explicitly disabled via build config 00:02:52.495 gpudev: explicitly disabled via build config 00:02:52.495 gro: explicitly disabled via build config 00:02:52.495 gso: explicitly disabled via build config 00:02:52.495 ip_frag: explicitly disabled via build config 00:02:52.495 jobstats: explicitly disabled via build config 00:02:52.495 latencystats: explicitly disabled via build config 00:02:52.495 lpm: explicitly disabled via build config 00:02:52.495 member: explicitly disabled via build config 00:02:52.495 pcapng: explicitly disabled via build config 00:02:52.495 rawdev: explicitly disabled via build config 00:02:52.495 regexdev: explicitly disabled via build config 00:02:52.495 mldev: explicitly disabled via build config 00:02:52.495 rib: explicitly disabled via build config 00:02:52.495 sched: explicitly disabled via build config 00:02:52.495 stack: explicitly disabled via build config 00:02:52.495 ipsec: explicitly disabled via build config 00:02:52.495 pdcp: explicitly disabled via build config 00:02:52.495 fib: explicitly disabled via build config 00:02:52.495 port: explicitly disabled via build config 00:02:52.495 pdump: explicitly disabled via build config 00:02:52.495 table: explicitly disabled via build config 00:02:52.495 pipeline: explicitly disabled via build config 00:02:52.495 graph: explicitly disabled via build config 00:02:52.495 node: explicitly disabled via build config 00:02:52.495 00:02:52.495 drivers: 00:02:52.495 common/cpt: not in enabled drivers build config 00:02:52.495 common/dpaax: not in enabled drivers build config 00:02:52.495 common/iavf: not in enabled drivers build config 00:02:52.495 common/idpf: not in enabled drivers build config 00:02:52.495 common/mvep: not in enabled drivers build config 00:02:52.495 common/octeontx: not in enabled drivers build config 00:02:52.495 bus/auxiliary: not in enabled drivers build config 00:02:52.495 bus/cdx: not in enabled drivers build config 00:02:52.495 bus/dpaa: not in enabled drivers build config 00:02:52.495 bus/fslmc: not in enabled drivers build config 00:02:52.495 bus/ifpga: not in enabled drivers build config 00:02:52.495 bus/platform: not in enabled drivers build config 00:02:52.495 bus/vmbus: not in enabled drivers build config 00:02:52.495 common/cnxk: not in enabled drivers build config 00:02:52.495 common/mlx5: not in enabled drivers build config 00:02:52.495 common/nfp: not in enabled drivers build config 00:02:52.495 common/qat: not in enabled drivers build config 00:02:52.495 common/sfc_efx: not in enabled drivers build config 00:02:52.495 mempool/bucket: not in enabled drivers build config 00:02:52.495 mempool/cnxk: not in enabled drivers build config 00:02:52.495 mempool/dpaa: not in enabled drivers build config 00:02:52.495 mempool/dpaa2: not in enabled drivers build config 00:02:52.495 mempool/octeontx: not in enabled drivers build config 00:02:52.495 mempool/stack: not in enabled drivers build config 00:02:52.495 dma/cnxk: not in enabled drivers build config 00:02:52.495 dma/dpaa: not in enabled 
drivers build config 00:02:52.495 dma/dpaa2: not in enabled drivers build config 00:02:52.495 dma/hisilicon: not in enabled drivers build config 00:02:52.495 dma/idxd: not in enabled drivers build config 00:02:52.495 dma/ioat: not in enabled drivers build config 00:02:52.495 dma/skeleton: not in enabled drivers build config 00:02:52.495 net/af_packet: not in enabled drivers build config 00:02:52.495 net/af_xdp: not in enabled drivers build config 00:02:52.495 net/ark: not in enabled drivers build config 00:02:52.495 net/atlantic: not in enabled drivers build config 00:02:52.495 net/avp: not in enabled drivers build config 00:02:52.495 net/axgbe: not in enabled drivers build config 00:02:52.495 net/bnx2x: not in enabled drivers build config 00:02:52.495 net/bnxt: not in enabled drivers build config 00:02:52.495 net/bonding: not in enabled drivers build config 00:02:52.495 net/cnxk: not in enabled drivers build config 00:02:52.495 net/cpfl: not in enabled drivers build config 00:02:52.495 net/cxgbe: not in enabled drivers build config 00:02:52.495 net/dpaa: not in enabled drivers build config 00:02:52.495 net/dpaa2: not in enabled drivers build config 00:02:52.495 net/e1000: not in enabled drivers build config 00:02:52.495 net/ena: not in enabled drivers build config 00:02:52.495 net/enetc: not in enabled drivers build config 00:02:52.495 net/enetfec: not in enabled drivers build config 00:02:52.495 net/enic: not in enabled drivers build config 00:02:52.495 net/failsafe: not in enabled drivers build config 00:02:52.495 net/fm10k: not in enabled drivers build config 00:02:52.495 net/gve: not in enabled drivers build config 00:02:52.495 net/hinic: not in enabled drivers build config 00:02:52.495 net/hns3: not in enabled drivers build config 00:02:52.495 net/i40e: not in enabled drivers build config 00:02:52.495 net/iavf: not in enabled drivers build config 00:02:52.495 net/ice: not in enabled drivers build config 00:02:52.495 net/idpf: not in enabled drivers build config 00:02:52.495 net/igc: not in enabled drivers build config 00:02:52.495 net/ionic: not in enabled drivers build config 00:02:52.495 net/ipn3ke: not in enabled drivers build config 00:02:52.495 net/ixgbe: not in enabled drivers build config 00:02:52.495 net/mana: not in enabled drivers build config 00:02:52.495 net/memif: not in enabled drivers build config 00:02:52.495 net/mlx4: not in enabled drivers build config 00:02:52.495 net/mlx5: not in enabled drivers build config 00:02:52.495 net/mvneta: not in enabled drivers build config 00:02:52.495 net/mvpp2: not in enabled drivers build config 00:02:52.495 net/netvsc: not in enabled drivers build config 00:02:52.495 net/nfb: not in enabled drivers build config 00:02:52.495 net/nfp: not in enabled drivers build config 00:02:52.495 net/ngbe: not in enabled drivers build config 00:02:52.495 net/null: not in enabled drivers build config 00:02:52.495 net/octeontx: not in enabled drivers build config 00:02:52.495 net/octeon_ep: not in enabled drivers build config 00:02:52.495 net/pcap: not in enabled drivers build config 00:02:52.495 net/pfe: not in enabled drivers build config 00:02:52.495 net/qede: not in enabled drivers build config 00:02:52.495 net/ring: not in enabled drivers build config 00:02:52.495 net/sfc: not in enabled drivers build config 00:02:52.495 net/softnic: not in enabled drivers build config 00:02:52.495 net/tap: not in enabled drivers build config 00:02:52.495 net/thunderx: not in enabled drivers build config 00:02:52.495 net/txgbe: not in enabled drivers build 
config 00:02:52.495 net/vdev_netvsc: not in enabled drivers build config 00:02:52.495 net/vhost: not in enabled drivers build config 00:02:52.495 net/virtio: not in enabled drivers build config 00:02:52.495 net/vmxnet3: not in enabled drivers build config 00:02:52.495 raw/*: missing internal dependency, "rawdev" 00:02:52.495 crypto/armv8: not in enabled drivers build config 00:02:52.495 crypto/bcmfs: not in enabled drivers build config 00:02:52.495 crypto/caam_jr: not in enabled drivers build config 00:02:52.495 crypto/ccp: not in enabled drivers build config 00:02:52.495 crypto/cnxk: not in enabled drivers build config 00:02:52.495 crypto/dpaa_sec: not in enabled drivers build config 00:02:52.495 crypto/dpaa2_sec: not in enabled drivers build config 00:02:52.495 crypto/ipsec_mb: not in enabled drivers build config 00:02:52.495 crypto/mlx5: not in enabled drivers build config 00:02:52.495 crypto/mvsam: not in enabled drivers build config 00:02:52.495 crypto/nitrox: not in enabled drivers build config 00:02:52.495 crypto/null: not in enabled drivers build config 00:02:52.495 crypto/octeontx: not in enabled drivers build config 00:02:52.495 crypto/openssl: not in enabled drivers build config 00:02:52.495 crypto/scheduler: not in enabled drivers build config 00:02:52.495 crypto/uadk: not in enabled drivers build config 00:02:52.495 crypto/virtio: not in enabled drivers build config 00:02:52.495 compress/isal: not in enabled drivers build config 00:02:52.495 compress/mlx5: not in enabled drivers build config 00:02:52.495 compress/octeontx: not in enabled drivers build config 00:02:52.495 compress/zlib: not in enabled drivers build config 00:02:52.495 regex/*: missing internal dependency, "regexdev" 00:02:52.495 ml/*: missing internal dependency, "mldev" 00:02:52.495 vdpa/ifc: not in enabled drivers build config 00:02:52.495 vdpa/mlx5: not in enabled drivers build config 00:02:52.495 vdpa/nfp: not in enabled drivers build config 00:02:52.495 vdpa/sfc: not in enabled drivers build config 00:02:52.495 event/*: missing internal dependency, "eventdev" 00:02:52.495 baseband/*: missing internal dependency, "bbdev" 00:02:52.495 gpu/*: missing internal dependency, "gpudev" 00:02:52.495 00:02:52.495 00:02:52.495 Build targets in project: 85 00:02:52.495 00:02:52.495 DPDK 23.11.0 00:02:52.495 00:02:52.495 User defined options 00:02:52.495 buildtype : debug 00:02:52.495 default_library : shared 00:02:52.495 libdir : lib 00:02:52.495 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:52.495 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:52.495 c_link_args : 00:02:52.495 cpu_instruction_set: native 00:02:52.495 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:52.495 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:52.495 enable_docs : false 00:02:52.495 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:52.495 enable_kmods : false 00:02:52.495 tests : false 00:02:52.495 00:02:52.495 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:52.495 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 
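For reference, the DPDK configuration summarized above (buildtype, library type, prefix, c_args, and the disable/enable lists) corresponds roughly to a standalone meson invocation like the sketch below. This is an illustration reconstructed from the recorded "User defined options", not the command SPDK's build scripts actually emit; the option names are standard DPDK/meson options and the values are copied from the summary.

# Illustrative reconstruction of the recorded DPDK configuration (not the command actually run):
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp /home/vagrant/spdk_repo/spdk/dpdk \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Dlibdir=lib \
    -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Ddisable_apps='dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test' \
    -Ddisable_libs='acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring'
ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp

The 85-target ninja build that follows in the log is the compile phase of exactly this configuration.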
00:02:52.495 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:52.495 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:52.495 [3/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:52.495 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:52.495 [5/265] Linking static target lib/librte_kvargs.a 00:02:52.495 [6/265] Linking static target lib/librte_log.a 00:02:52.495 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:52.495 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:52.495 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:52.495 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:52.495 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.755 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:52.755 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:52.755 [14/265] Linking static target lib/librte_telemetry.a 00:02:53.014 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.014 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.014 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.014 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.014 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.014 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.014 [21/265] Linking target lib/librte_log.so.24.0 00:02:53.282 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.282 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:53.282 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:53.282 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:53.282 [26/265] Linking target lib/librte_kvargs.so.24.0 00:02:53.620 [27/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.620 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:53.620 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.620 [30/265] Linking target lib/librte_telemetry.so.24.0 00:02:53.889 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.889 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.889 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.889 [34/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:53.889 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:54.146 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:54.146 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.146 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.146 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:54.405 [40/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:54.405 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.405 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.405 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.405 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:54.663 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:54.921 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.921 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.921 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.922 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:55.180 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:55.180 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:55.180 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:55.439 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.439 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:55.439 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:55.439 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:55.439 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.697 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:55.955 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.955 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:55.955 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.955 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.955 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.955 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.955 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.213 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.213 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:56.470 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.728 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:56.728 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:56.728 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.728 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.728 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.728 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.985 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.985 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.985 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.985 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:57.243 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:57.243 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:57.243 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:57.501 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.501 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:57.759 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.759 [85/265] Linking static target lib/librte_eal.a 00:02:57.759 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:58.018 [87/265] Linking static target lib/librte_ring.a 00:02:58.018 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:58.018 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:58.018 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:58.276 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:58.276 [92/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:58.276 [93/265] Linking static target lib/librte_mempool.a 00:02:58.276 [94/265] Linking static target lib/librte_rcu.a 00:02:58.535 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.535 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:58.535 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.535 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:58.793 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.793 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:58.793 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:59.052 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.052 [103/265] Linking static target lib/librte_mbuf.a 00:02:59.311 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:59.311 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:59.311 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:59.311 [107/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.311 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:59.311 [109/265] Linking static target lib/librte_net.a 00:02:59.569 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.827 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:59.827 [112/265] Linking static target lib/librte_meter.a 00:02:59.827 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:59.827 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.085 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.343 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.343 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.343 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.343 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:00.910 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.910 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:03:01.167 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.167 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:01.425 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:01.425 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:01.425 [126/265] Linking static target lib/librte_pci.a 00:03:01.425 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:01.425 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:01.425 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:01.425 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:01.683 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:01.683 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:01.683 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.683 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:01.683 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:01.683 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:01.683 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:01.683 [138/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.683 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:01.683 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:01.683 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:01.970 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:01.970 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.970 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:01.970 [145/265] Linking static target lib/librte_ethdev.a 00:03:01.970 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:02.232 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:02.232 [148/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:02.490 [149/265] Linking static target lib/librte_cmdline.a 00:03:02.490 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:02.490 [151/265] Linking static target lib/librte_timer.a 00:03:02.490 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.746 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.746 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:02.746 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.004 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:03.004 [157/265] Linking static target lib/librte_hash.a 00:03:03.004 [158/265] Linking static target lib/librte_compressdev.a 00:03:03.004 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:03.262 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.262 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:03.262 [162/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:03:03.520 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:03.520 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:03.520 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.520 [166/265] Linking static target lib/librte_dmadev.a 00:03:03.778 [167/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.778 [168/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:03.778 [169/265] Linking static target lib/librte_cryptodev.a 00:03:03.778 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:03.778 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:04.037 [172/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:04.037 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.037 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.037 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:04.294 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.552 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:04.552 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:04.552 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:04.552 [180/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:04.552 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:04.810 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.810 [183/265] Linking static target lib/librte_power.a 00:03:05.068 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.068 [185/265] Linking static target lib/librte_reorder.a 00:03:05.068 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.068 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:05.068 [188/265] Linking static target lib/librte_security.a 00:03:05.327 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:05.327 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:05.586 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.586 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.844 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.844 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.844 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:06.102 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:06.102 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:06.360 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.360 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:06.360 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:06.618 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.618 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:06.618 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:06.876 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.876 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.876 [206/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.876 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:06.876 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:06.876 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:07.135 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:07.135 [211/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:07.135 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.135 [213/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.135 [214/265] Linking static target drivers/librte_bus_pci.a 00:03:07.135 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.135 [216/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.135 [217/265] Linking static target drivers/librte_bus_vdev.a 00:03:07.135 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:07.135 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:07.393 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.393 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:07.393 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.393 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:07.393 [224/265] Linking static target drivers/librte_mempool_ring.a 00:03:07.651 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.217 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.217 [227/265] Linking static target lib/librte_vhost.a 00:03:09.164 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.164 [229/265] Linking target lib/librte_eal.so.24.0 00:03:09.423 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:09.423 [231/265] Linking target lib/librte_ring.so.24.0 00:03:09.423 [232/265] Linking target lib/librte_pci.so.24.0 00:03:09.423 [233/265] Linking target lib/librte_timer.so.24.0 00:03:09.423 [234/265] Linking target lib/librte_dmadev.so.24.0 00:03:09.423 [235/265] Linking target lib/librte_meter.so.24.0 00:03:09.423 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:09.423 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:09.681 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:09.681 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:09.681 [240/265] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:09.681 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:09.681 [242/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.681 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:09.681 [244/265] Linking target lib/librte_rcu.so.24.0 00:03:09.681 [245/265] Linking target lib/librte_mempool.so.24.0 00:03:09.681 [246/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.681 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:09.681 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:09.941 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:09.941 [250/265] Linking target lib/librte_mbuf.so.24.0 00:03:09.941 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:10.200 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:03:10.200 [253/265] Linking target lib/librte_reorder.so.24.0 00:03:10.200 [254/265] Linking target lib/librte_compressdev.so.24.0 00:03:10.200 [255/265] Linking target lib/librte_net.so.24.0 00:03:10.200 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:10.200 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:10.200 [258/265] Linking target lib/librte_hash.so.24.0 00:03:10.200 [259/265] Linking target lib/librte_security.so.24.0 00:03:10.200 [260/265] Linking target lib/librte_cmdline.so.24.0 00:03:10.200 [261/265] Linking target lib/librte_ethdev.so.24.0 00:03:10.458 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:10.459 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:10.459 [264/265] Linking target lib/librte_power.so.24.0 00:03:10.459 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:10.459 INFO: autodetecting backend as ninja 00:03:10.459 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.851 CC lib/log/log_flags.o 00:03:11.851 CC lib/log/log.o 00:03:11.851 CC lib/log/log_deprecated.o 00:03:11.851 CC lib/ut/ut.o 00:03:11.851 CC lib/ut_mock/mock.o 00:03:11.851 LIB libspdk_ut_mock.a 00:03:11.851 LIB libspdk_log.a 00:03:11.851 SO libspdk_ut_mock.so.6.0 00:03:11.851 LIB libspdk_ut.a 00:03:11.851 SO libspdk_ut.so.2.0 00:03:11.851 SO libspdk_log.so.7.0 00:03:11.851 SYMLINK libspdk_ut_mock.so 00:03:12.109 SYMLINK libspdk_ut.so 00:03:12.109 SYMLINK libspdk_log.so 00:03:12.109 CXX lib/trace_parser/trace.o 00:03:12.109 CC lib/ioat/ioat.o 00:03:12.109 CC lib/dma/dma.o 00:03:12.109 CC lib/util/base64.o 00:03:12.368 CC lib/util/bit_array.o 00:03:12.368 CC lib/util/crc16.o 00:03:12.368 CC lib/util/crc32.o 00:03:12.368 CC lib/util/cpuset.o 00:03:12.368 CC lib/util/crc32c.o 00:03:12.368 CC lib/vfio_user/host/vfio_user_pci.o 00:03:12.368 CC lib/vfio_user/host/vfio_user.o 00:03:12.368 CC lib/util/crc32_ieee.o 00:03:12.368 CC lib/util/crc64.o 00:03:12.368 CC lib/util/dif.o 00:03:12.368 LIB libspdk_dma.a 00:03:12.626 CC lib/util/fd.o 00:03:12.626 LIB libspdk_ioat.a 00:03:12.626 SO libspdk_dma.so.4.0 00:03:12.626 SO libspdk_ioat.so.7.0 00:03:12.626 CC lib/util/file.o 00:03:12.626 CC lib/util/hexlify.o 00:03:12.626 CC lib/util/iov.o 00:03:12.626 SYMLINK libspdk_dma.so 
00:03:12.626 CC lib/util/math.o 00:03:12.626 SYMLINK libspdk_ioat.so 00:03:12.626 CC lib/util/pipe.o 00:03:12.626 CC lib/util/strerror_tls.o 00:03:12.626 LIB libspdk_vfio_user.a 00:03:12.626 CC lib/util/string.o 00:03:12.626 SO libspdk_vfio_user.so.5.0 00:03:12.626 CC lib/util/uuid.o 00:03:12.885 SYMLINK libspdk_vfio_user.so 00:03:12.885 CC lib/util/fd_group.o 00:03:12.885 CC lib/util/xor.o 00:03:12.885 CC lib/util/zipf.o 00:03:13.143 LIB libspdk_util.a 00:03:13.143 SO libspdk_util.so.9.0 00:03:13.401 LIB libspdk_trace_parser.a 00:03:13.401 SO libspdk_trace_parser.so.5.0 00:03:13.401 SYMLINK libspdk_util.so 00:03:13.401 SYMLINK libspdk_trace_parser.so 00:03:13.659 CC lib/env_dpdk/env.o 00:03:13.659 CC lib/rdma/common.o 00:03:13.659 CC lib/rdma/rdma_verbs.o 00:03:13.659 CC lib/json/json_parse.o 00:03:13.659 CC lib/env_dpdk/pci.o 00:03:13.659 CC lib/env_dpdk/memory.o 00:03:13.659 CC lib/json/json_util.o 00:03:13.659 CC lib/vmd/vmd.o 00:03:13.659 CC lib/conf/conf.o 00:03:13.659 CC lib/idxd/idxd.o 00:03:13.659 CC lib/vmd/led.o 00:03:13.659 LIB libspdk_conf.a 00:03:13.917 CC lib/json/json_write.o 00:03:13.917 CC lib/env_dpdk/init.o 00:03:13.917 SO libspdk_conf.so.6.0 00:03:13.917 LIB libspdk_rdma.a 00:03:13.917 SO libspdk_rdma.so.6.0 00:03:13.917 SYMLINK libspdk_conf.so 00:03:13.917 CC lib/idxd/idxd_user.o 00:03:13.917 CC lib/env_dpdk/threads.o 00:03:13.917 CC lib/env_dpdk/pci_ioat.o 00:03:13.917 SYMLINK libspdk_rdma.so 00:03:13.917 CC lib/env_dpdk/pci_virtio.o 00:03:14.176 CC lib/env_dpdk/pci_vmd.o 00:03:14.176 CC lib/env_dpdk/pci_idxd.o 00:03:14.176 CC lib/env_dpdk/pci_event.o 00:03:14.176 CC lib/env_dpdk/sigbus_handler.o 00:03:14.176 LIB libspdk_json.a 00:03:14.176 SO libspdk_json.so.6.0 00:03:14.176 LIB libspdk_idxd.a 00:03:14.176 LIB libspdk_vmd.a 00:03:14.176 CC lib/env_dpdk/pci_dpdk.o 00:03:14.176 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:14.176 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:14.176 SO libspdk_vmd.so.6.0 00:03:14.176 SO libspdk_idxd.so.12.0 00:03:14.176 SYMLINK libspdk_json.so 00:03:14.176 SYMLINK libspdk_idxd.so 00:03:14.176 SYMLINK libspdk_vmd.so 00:03:14.435 CC lib/jsonrpc/jsonrpc_server.o 00:03:14.435 CC lib/jsonrpc/jsonrpc_client.o 00:03:14.435 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:14.435 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.694 LIB libspdk_jsonrpc.a 00:03:14.694 SO libspdk_jsonrpc.so.6.0 00:03:14.953 SYMLINK libspdk_jsonrpc.so 00:03:14.953 LIB libspdk_env_dpdk.a 00:03:14.953 SO libspdk_env_dpdk.so.14.0 00:03:15.212 CC lib/rpc/rpc.o 00:03:15.212 SYMLINK libspdk_env_dpdk.so 00:03:15.470 LIB libspdk_rpc.a 00:03:15.470 SO libspdk_rpc.so.6.0 00:03:15.470 SYMLINK libspdk_rpc.so 00:03:15.728 CC lib/notify/notify_rpc.o 00:03:15.728 CC lib/notify/notify.o 00:03:15.728 CC lib/keyring/keyring.o 00:03:15.728 CC lib/keyring/keyring_rpc.o 00:03:15.728 CC lib/trace/trace.o 00:03:15.728 CC lib/trace/trace_flags.o 00:03:15.728 CC lib/trace/trace_rpc.o 00:03:15.986 LIB libspdk_notify.a 00:03:15.987 SO libspdk_notify.so.6.0 00:03:15.987 LIB libspdk_trace.a 00:03:15.987 LIB libspdk_keyring.a 00:03:15.987 SO libspdk_trace.so.10.0 00:03:15.987 SYMLINK libspdk_notify.so 00:03:15.987 SO libspdk_keyring.so.1.0 00:03:15.987 SYMLINK libspdk_trace.so 00:03:15.987 SYMLINK libspdk_keyring.so 00:03:16.252 CC lib/sock/sock.o 00:03:16.252 CC lib/sock/sock_rpc.o 00:03:16.252 CC lib/thread/thread.o 00:03:16.252 CC lib/thread/iobuf.o 00:03:16.819 LIB libspdk_sock.a 00:03:16.819 SO libspdk_sock.so.9.0 00:03:16.819 SYMLINK libspdk_sock.so 00:03:17.079 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:17.079 CC 
lib/nvme/nvme_ctrlr.o 00:03:17.079 CC lib/nvme/nvme_fabric.o 00:03:17.079 CC lib/nvme/nvme_ns_cmd.o 00:03:17.079 CC lib/nvme/nvme_ns.o 00:03:17.079 CC lib/nvme/nvme_pcie_common.o 00:03:17.079 CC lib/nvme/nvme_qpair.o 00:03:17.079 CC lib/nvme/nvme_pcie.o 00:03:17.079 CC lib/nvme/nvme.o 00:03:17.645 LIB libspdk_thread.a 00:03:17.903 SO libspdk_thread.so.10.0 00:03:17.903 SYMLINK libspdk_thread.so 00:03:17.903 CC lib/nvme/nvme_quirks.o 00:03:17.903 CC lib/nvme/nvme_transport.o 00:03:17.903 CC lib/nvme/nvme_discovery.o 00:03:17.903 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.160 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.160 CC lib/nvme/nvme_tcp.o 00:03:18.160 CC lib/nvme/nvme_opal.o 00:03:18.160 CC lib/nvme/nvme_io_msg.o 00:03:18.160 CC lib/nvme/nvme_poll_group.o 00:03:18.725 CC lib/accel/accel.o 00:03:18.725 CC lib/nvme/nvme_zns.o 00:03:18.725 CC lib/accel/accel_rpc.o 00:03:18.725 CC lib/init/json_config.o 00:03:18.725 CC lib/blob/blobstore.o 00:03:18.725 CC lib/virtio/virtio.o 00:03:19.058 CC lib/nvme/nvme_stubs.o 00:03:19.058 CC lib/accel/accel_sw.o 00:03:19.058 CC lib/init/subsystem.o 00:03:19.058 CC lib/init/subsystem_rpc.o 00:03:19.058 CC lib/virtio/virtio_vhost_user.o 00:03:19.316 CC lib/vfu_tgt/tgt_endpoint.o 00:03:19.316 CC lib/vfu_tgt/tgt_rpc.o 00:03:19.316 CC lib/init/rpc.o 00:03:19.316 CC lib/blob/request.o 00:03:19.316 CC lib/blob/zeroes.o 00:03:19.316 LIB libspdk_init.a 00:03:19.574 CC lib/nvme/nvme_auth.o 00:03:19.574 CC lib/virtio/virtio_vfio_user.o 00:03:19.574 SO libspdk_init.so.5.0 00:03:19.574 LIB libspdk_vfu_tgt.a 00:03:19.574 CC lib/nvme/nvme_cuse.o 00:03:19.574 SO libspdk_vfu_tgt.so.3.0 00:03:19.574 CC lib/nvme/nvme_vfio_user.o 00:03:19.574 SYMLINK libspdk_init.so 00:03:19.574 SYMLINK libspdk_vfu_tgt.so 00:03:19.574 CC lib/virtio/virtio_pci.o 00:03:19.574 LIB libspdk_accel.a 00:03:19.574 CC lib/blob/blob_bs_dev.o 00:03:19.574 CC lib/nvme/nvme_rdma.o 00:03:19.574 SO libspdk_accel.so.15.0 00:03:19.833 CC lib/event/app.o 00:03:19.833 CC lib/event/reactor.o 00:03:19.833 SYMLINK libspdk_accel.so 00:03:19.833 CC lib/event/log_rpc.o 00:03:19.833 CC lib/event/app_rpc.o 00:03:19.833 LIB libspdk_virtio.a 00:03:20.091 CC lib/event/scheduler_static.o 00:03:20.091 SO libspdk_virtio.so.7.0 00:03:20.091 SYMLINK libspdk_virtio.so 00:03:20.091 LIB libspdk_event.a 00:03:20.350 SO libspdk_event.so.13.0 00:03:20.350 CC lib/bdev/bdev.o 00:03:20.350 CC lib/bdev/bdev_rpc.o 00:03:20.350 CC lib/bdev/scsi_nvme.o 00:03:20.350 CC lib/bdev/bdev_zone.o 00:03:20.350 CC lib/bdev/part.o 00:03:20.350 SYMLINK libspdk_event.so 00:03:20.917 LIB libspdk_nvme.a 00:03:21.176 SO libspdk_nvme.so.13.0 00:03:21.743 SYMLINK libspdk_nvme.so 00:03:21.743 LIB libspdk_blob.a 00:03:21.743 SO libspdk_blob.so.11.0 00:03:21.743 SYMLINK libspdk_blob.so 00:03:22.001 CC lib/lvol/lvol.o 00:03:22.001 CC lib/blobfs/blobfs.o 00:03:22.001 CC lib/blobfs/tree.o 00:03:22.936 LIB libspdk_bdev.a 00:03:22.936 LIB libspdk_blobfs.a 00:03:22.936 SO libspdk_blobfs.so.10.0 00:03:22.936 SO libspdk_bdev.so.15.0 00:03:22.936 LIB libspdk_lvol.a 00:03:22.936 SO libspdk_lvol.so.10.0 00:03:22.936 SYMLINK libspdk_blobfs.so 00:03:22.936 SYMLINK libspdk_bdev.so 00:03:22.936 SYMLINK libspdk_lvol.so 00:03:23.194 CC lib/nbd/nbd.o 00:03:23.194 CC lib/ublk/ublk.o 00:03:23.194 CC lib/ublk/ublk_rpc.o 00:03:23.194 CC lib/scsi/dev.o 00:03:23.194 CC lib/nbd/nbd_rpc.o 00:03:23.194 CC lib/scsi/lun.o 00:03:23.194 CC lib/nvmf/ctrlr.o 00:03:23.194 CC lib/scsi/port.o 00:03:23.194 CC lib/scsi/scsi.o 00:03:23.194 CC lib/ftl/ftl_core.o 00:03:23.451 CC 
lib/scsi/scsi_bdev.o 00:03:23.451 CC lib/scsi/scsi_pr.o 00:03:23.451 CC lib/scsi/scsi_rpc.o 00:03:23.451 CC lib/scsi/task.o 00:03:23.451 CC lib/ftl/ftl_init.o 00:03:23.451 CC lib/nvmf/ctrlr_discovery.o 00:03:23.709 CC lib/nvmf/ctrlr_bdev.o 00:03:23.709 CC lib/ftl/ftl_layout.o 00:03:23.709 CC lib/ftl/ftl_debug.o 00:03:23.709 LIB libspdk_nbd.a 00:03:23.709 CC lib/nvmf/subsystem.o 00:03:23.709 SO libspdk_nbd.so.7.0 00:03:23.709 SYMLINK libspdk_nbd.so 00:03:23.709 CC lib/nvmf/nvmf.o 00:03:23.709 CC lib/nvmf/nvmf_rpc.o 00:03:23.968 LIB libspdk_ublk.a 00:03:23.968 SO libspdk_ublk.so.3.0 00:03:23.968 CC lib/ftl/ftl_io.o 00:03:23.968 SYMLINK libspdk_ublk.so 00:03:23.968 LIB libspdk_scsi.a 00:03:23.968 CC lib/ftl/ftl_sb.o 00:03:23.968 CC lib/ftl/ftl_l2p.o 00:03:23.968 CC lib/nvmf/transport.o 00:03:23.968 SO libspdk_scsi.so.9.0 00:03:24.227 SYMLINK libspdk_scsi.so 00:03:24.227 CC lib/nvmf/tcp.o 00:03:24.227 CC lib/nvmf/vfio_user.o 00:03:24.227 CC lib/nvmf/rdma.o 00:03:24.227 CC lib/ftl/ftl_l2p_flat.o 00:03:24.485 CC lib/iscsi/conn.o 00:03:24.485 CC lib/ftl/ftl_nv_cache.o 00:03:24.744 CC lib/ftl/ftl_band.o 00:03:24.744 CC lib/ftl/ftl_band_ops.o 00:03:24.744 CC lib/ftl/ftl_writer.o 00:03:24.744 CC lib/ftl/ftl_rq.o 00:03:24.744 CC lib/ftl/ftl_reloc.o 00:03:25.002 CC lib/ftl/ftl_l2p_cache.o 00:03:25.002 CC lib/ftl/ftl_p2l.o 00:03:25.002 CC lib/ftl/mngt/ftl_mngt.o 00:03:25.002 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:25.002 CC lib/iscsi/init_grp.o 00:03:25.261 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:25.261 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:25.261 CC lib/iscsi/iscsi.o 00:03:25.261 CC lib/iscsi/md5.o 00:03:25.261 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:25.261 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:25.518 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.518 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.518 CC lib/iscsi/param.o 00:03:25.518 CC lib/vhost/vhost.o 00:03:25.518 CC lib/vhost/vhost_rpc.o 00:03:25.775 CC lib/vhost/vhost_scsi.o 00:03:25.775 CC lib/vhost/vhost_blk.o 00:03:25.775 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.775 CC lib/vhost/rte_vhost_user.o 00:03:25.775 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.775 CC lib/iscsi/portal_grp.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.033 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.033 CC lib/iscsi/tgt_node.o 00:03:26.033 CC lib/iscsi/iscsi_subsystem.o 00:03:26.292 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:26.292 LIB libspdk_nvmf.a 00:03:26.292 CC lib/ftl/utils/ftl_conf.o 00:03:26.292 SO libspdk_nvmf.so.18.0 00:03:26.292 CC lib/iscsi/iscsi_rpc.o 00:03:26.550 CC lib/ftl/utils/ftl_md.o 00:03:26.550 CC lib/ftl/utils/ftl_mempool.o 00:03:26.550 SYMLINK libspdk_nvmf.so 00:03:26.550 CC lib/iscsi/task.o 00:03:26.550 CC lib/ftl/utils/ftl_bitmap.o 00:03:26.550 CC lib/ftl/utils/ftl_property.o 00:03:26.550 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.550 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.550 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:26.809 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:26.809 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:26.809 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:26.809 LIB libspdk_vhost.a 00:03:26.809 LIB libspdk_iscsi.a 00:03:26.809 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:26.809 SO libspdk_vhost.so.8.0 00:03:26.809 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:26.809 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:26.809 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:26.809 SO libspdk_iscsi.so.8.0 00:03:27.068 CC lib/ftl/base/ftl_base_dev.o 00:03:27.068 CC lib/ftl/base/ftl_base_bdev.o 00:03:27.068 CC lib/ftl/ftl_trace.o 00:03:27.068 SYMLINK 
libspdk_vhost.so 00:03:27.068 SYMLINK libspdk_iscsi.so 00:03:27.326 LIB libspdk_ftl.a 00:03:27.326 SO libspdk_ftl.so.9.0 00:03:27.893 SYMLINK libspdk_ftl.so 00:03:28.151 CC module/vfu_device/vfu_virtio.o 00:03:28.151 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.151 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.151 CC module/keyring/file/keyring.o 00:03:28.151 CC module/accel/ioat/accel_ioat.o 00:03:28.151 CC module/accel/error/accel_error.o 00:03:28.151 CC module/blob/bdev/blob_bdev.o 00:03:28.151 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.151 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.151 CC module/sock/posix/posix.o 00:03:28.151 LIB libspdk_env_dpdk_rpc.a 00:03:28.151 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.409 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.409 CC module/keyring/file/keyring_rpc.o 00:03:28.409 LIB libspdk_scheduler_gscheduler.a 00:03:28.409 CC module/vfu_device/vfu_virtio_blk.o 00:03:28.409 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.409 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.409 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.409 CC module/accel/error/accel_error_rpc.o 00:03:28.409 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.409 LIB libspdk_scheduler_dynamic.a 00:03:28.409 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.409 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.409 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.409 CC module/vfu_device/vfu_virtio_scsi.o 00:03:28.409 CC module/vfu_device/vfu_virtio_rpc.o 00:03:28.409 LIB libspdk_blob_bdev.a 00:03:28.409 LIB libspdk_keyring_file.a 00:03:28.409 SO libspdk_blob_bdev.so.11.0 00:03:28.409 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.409 SO libspdk_keyring_file.so.1.0 00:03:28.409 LIB libspdk_accel_ioat.a 00:03:28.409 LIB libspdk_accel_error.a 00:03:28.667 SYMLINK libspdk_blob_bdev.so 00:03:28.667 SO libspdk_accel_ioat.so.6.0 00:03:28.667 SO libspdk_accel_error.so.2.0 00:03:28.667 SYMLINK libspdk_keyring_file.so 00:03:28.667 SYMLINK libspdk_accel_ioat.so 00:03:28.667 SYMLINK libspdk_accel_error.so 00:03:28.667 CC module/accel/dsa/accel_dsa.o 00:03:28.667 CC module/accel/iaa/accel_iaa.o 00:03:28.667 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.944 LIB libspdk_vfu_device.a 00:03:28.944 SO libspdk_vfu_device.so.3.0 00:03:28.944 CC module/bdev/gpt/gpt.o 00:03:28.944 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.944 CC module/bdev/delay/vbdev_delay.o 00:03:28.944 CC module/bdev/error/vbdev_error.o 00:03:28.944 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.944 LIB libspdk_sock_posix.a 00:03:28.944 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.944 LIB libspdk_accel_dsa.a 00:03:28.944 SO libspdk_sock_posix.so.6.0 00:03:28.944 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.944 SYMLINK libspdk_vfu_device.so 00:03:28.944 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.944 SO libspdk_accel_dsa.so.5.0 00:03:28.944 SYMLINK libspdk_sock_posix.so 00:03:28.944 SYMLINK libspdk_accel_dsa.so 00:03:28.944 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.944 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.203 LIB libspdk_accel_iaa.a 00:03:29.203 SO libspdk_accel_iaa.so.3.0 00:03:29.203 LIB libspdk_bdev_error.a 00:03:29.203 SO libspdk_bdev_error.so.6.0 00:03:29.203 SYMLINK libspdk_accel_iaa.so 00:03:29.203 CC module/bdev/malloc/bdev_malloc.o 00:03:29.203 LIB libspdk_blobfs_bdev.a 00:03:29.203 CC module/bdev/null/bdev_null.o 00:03:29.203 LIB libspdk_bdev_gpt.a 00:03:29.203 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.203 LIB libspdk_bdev_delay.a 00:03:29.203 SO libspdk_blobfs_bdev.so.6.0 00:03:29.203 
SO libspdk_bdev_gpt.so.6.0 00:03:29.203 SYMLINK libspdk_bdev_error.so 00:03:29.203 CC module/bdev/nvme/bdev_nvme.o 00:03:29.203 SO libspdk_bdev_delay.so.6.0 00:03:29.461 SYMLINK libspdk_blobfs_bdev.so 00:03:29.461 SYMLINK libspdk_bdev_gpt.so 00:03:29.461 SYMLINK libspdk_bdev_delay.so 00:03:29.461 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.461 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.461 CC module/bdev/raid/bdev_raid.o 00:03:29.461 CC module/bdev/null/bdev_null_rpc.o 00:03:29.461 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.461 CC module/bdev/split/vbdev_split.o 00:03:29.461 CC module/bdev/aio/bdev_aio.o 00:03:29.461 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.461 LIB libspdk_bdev_lvol.a 00:03:29.763 SO libspdk_bdev_lvol.so.6.0 00:03:29.763 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.763 SYMLINK libspdk_bdev_lvol.so 00:03:29.763 LIB libspdk_bdev_passthru.a 00:03:29.763 LIB libspdk_bdev_null.a 00:03:29.763 SO libspdk_bdev_passthru.so.6.0 00:03:29.763 SO libspdk_bdev_null.so.6.0 00:03:29.763 LIB libspdk_bdev_split.a 00:03:29.763 SYMLINK libspdk_bdev_passthru.so 00:03:29.763 SO libspdk_bdev_split.so.6.0 00:03:29.763 SYMLINK libspdk_bdev_null.so 00:03:29.763 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.763 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:29.763 LIB libspdk_bdev_malloc.a 00:03:29.763 CC module/bdev/ftl/bdev_ftl.o 00:03:29.763 SYMLINK libspdk_bdev_split.so 00:03:29.763 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.763 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.022 SO libspdk_bdev_malloc.so.6.0 00:03:30.022 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.022 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.022 SYMLINK libspdk_bdev_malloc.so 00:03:30.022 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.022 LIB libspdk_bdev_zone_block.a 00:03:30.022 LIB libspdk_bdev_aio.a 00:03:30.022 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.022 SO libspdk_bdev_zone_block.so.6.0 00:03:30.022 SO libspdk_bdev_aio.so.6.0 00:03:30.022 SYMLINK libspdk_bdev_zone_block.so 00:03:30.022 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.022 SYMLINK libspdk_bdev_aio.so 00:03:30.022 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.280 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.280 LIB libspdk_bdev_ftl.a 00:03:30.280 SO libspdk_bdev_ftl.so.6.0 00:03:30.280 CC module/bdev/nvme/nvme_rpc.o 00:03:30.280 SYMLINK libspdk_bdev_ftl.so 00:03:30.280 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.280 LIB libspdk_bdev_iscsi.a 00:03:30.280 CC module/bdev/raid/raid0.o 00:03:30.280 SO libspdk_bdev_iscsi.so.6.0 00:03:30.280 CC module/bdev/nvme/vbdev_opal.o 00:03:30.280 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.538 SYMLINK libspdk_bdev_iscsi.so 00:03:30.538 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:30.538 CC module/bdev/raid/raid1.o 00:03:30.538 LIB libspdk_bdev_virtio.a 00:03:30.538 CC module/bdev/raid/concat.o 00:03:30.538 SO libspdk_bdev_virtio.so.6.0 00:03:30.538 SYMLINK libspdk_bdev_virtio.so 00:03:30.797 LIB libspdk_bdev_raid.a 00:03:30.797 SO libspdk_bdev_raid.so.6.0 00:03:30.797 SYMLINK libspdk_bdev_raid.so 00:03:31.365 LIB libspdk_bdev_nvme.a 00:03:31.624 SO libspdk_bdev_nvme.so.7.0 00:03:31.624 SYMLINK libspdk_bdev_nvme.so 00:03:32.223 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.223 CC module/event/subsystems/sock/sock.o 00:03:32.223 CC module/event/subsystems/vmd/vmd.o 00:03:32.223 CC module/event/subsystems/keyring/keyring.o 00:03:32.223 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.223 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.223 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:32.223 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.223 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:32.223 LIB libspdk_event_keyring.a 00:03:32.223 LIB libspdk_event_vhost_blk.a 00:03:32.223 LIB libspdk_event_sock.a 00:03:32.223 LIB libspdk_event_scheduler.a 00:03:32.223 LIB libspdk_event_vfu_tgt.a 00:03:32.223 LIB libspdk_event_vmd.a 00:03:32.223 LIB libspdk_event_iobuf.a 00:03:32.223 SO libspdk_event_keyring.so.1.0 00:03:32.223 SO libspdk_event_sock.so.5.0 00:03:32.223 SO libspdk_event_vhost_blk.so.3.0 00:03:32.223 SO libspdk_event_vfu_tgt.so.3.0 00:03:32.223 SO libspdk_event_scheduler.so.4.0 00:03:32.223 SO libspdk_event_vmd.so.6.0 00:03:32.223 SO libspdk_event_iobuf.so.3.0 00:03:32.481 SYMLINK libspdk_event_vfu_tgt.so 00:03:32.481 SYMLINK libspdk_event_keyring.so 00:03:32.481 SYMLINK libspdk_event_sock.so 00:03:32.481 SYMLINK libspdk_event_vhost_blk.so 00:03:32.481 SYMLINK libspdk_event_scheduler.so 00:03:32.481 SYMLINK libspdk_event_vmd.so 00:03:32.481 SYMLINK libspdk_event_iobuf.so 00:03:32.740 CC module/event/subsystems/accel/accel.o 00:03:32.740 LIB libspdk_event_accel.a 00:03:32.999 SO libspdk_event_accel.so.6.0 00:03:32.999 SYMLINK libspdk_event_accel.so 00:03:33.258 CC module/event/subsystems/bdev/bdev.o 00:03:33.258 LIB libspdk_event_bdev.a 00:03:33.516 SO libspdk_event_bdev.so.6.0 00:03:33.516 SYMLINK libspdk_event_bdev.so 00:03:33.775 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.775 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.775 CC module/event/subsystems/nbd/nbd.o 00:03:33.775 CC module/event/subsystems/ublk/ublk.o 00:03:33.775 CC module/event/subsystems/scsi/scsi.o 00:03:33.775 LIB libspdk_event_ublk.a 00:03:33.775 LIB libspdk_event_scsi.a 00:03:33.775 LIB libspdk_event_nbd.a 00:03:33.775 SO libspdk_event_ublk.so.3.0 00:03:34.034 SO libspdk_event_scsi.so.6.0 00:03:34.034 SO libspdk_event_nbd.so.6.0 00:03:34.034 SYMLINK libspdk_event_ublk.so 00:03:34.034 LIB libspdk_event_nvmf.a 00:03:34.034 SYMLINK libspdk_event_scsi.so 00:03:34.034 SYMLINK libspdk_event_nbd.so 00:03:34.034 SO libspdk_event_nvmf.so.6.0 00:03:34.034 SYMLINK libspdk_event_nvmf.so 00:03:34.293 CC module/event/subsystems/iscsi/iscsi.o 00:03:34.293 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:34.293 LIB libspdk_event_vhost_scsi.a 00:03:34.551 LIB libspdk_event_iscsi.a 00:03:34.551 SO libspdk_event_vhost_scsi.so.3.0 00:03:34.551 SO libspdk_event_iscsi.so.6.0 00:03:34.551 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.551 SYMLINK libspdk_event_iscsi.so 00:03:34.809 SO libspdk.so.6.0 00:03:34.809 SYMLINK libspdk.so 00:03:34.809 CXX app/trace/trace.o 00:03:34.809 TEST_HEADER include/spdk/accel.h 00:03:34.809 TEST_HEADER include/spdk/accel_module.h 00:03:34.809 TEST_HEADER include/spdk/assert.h 00:03:34.809 TEST_HEADER include/spdk/barrier.h 00:03:34.809 TEST_HEADER include/spdk/base64.h 00:03:34.809 TEST_HEADER include/spdk/bdev.h 00:03:35.068 TEST_HEADER include/spdk/bdev_module.h 00:03:35.068 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.068 TEST_HEADER include/spdk/bit_array.h 00:03:35.068 TEST_HEADER include/spdk/bit_pool.h 00:03:35.068 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.068 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.068 TEST_HEADER include/spdk/blobfs.h 00:03:35.068 TEST_HEADER include/spdk/blob.h 00:03:35.068 TEST_HEADER include/spdk/conf.h 00:03:35.068 TEST_HEADER include/spdk/config.h 00:03:35.068 TEST_HEADER include/spdk/cpuset.h 00:03:35.068 TEST_HEADER include/spdk/crc16.h 00:03:35.068 TEST_HEADER 
include/spdk/crc32.h 00:03:35.068 TEST_HEADER include/spdk/crc64.h 00:03:35.068 TEST_HEADER include/spdk/dif.h 00:03:35.068 TEST_HEADER include/spdk/dma.h 00:03:35.068 TEST_HEADER include/spdk/endian.h 00:03:35.068 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.068 TEST_HEADER include/spdk/env.h 00:03:35.068 TEST_HEADER include/spdk/event.h 00:03:35.068 TEST_HEADER include/spdk/fd_group.h 00:03:35.068 TEST_HEADER include/spdk/fd.h 00:03:35.068 TEST_HEADER include/spdk/file.h 00:03:35.068 TEST_HEADER include/spdk/ftl.h 00:03:35.068 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.068 TEST_HEADER include/spdk/hexlify.h 00:03:35.068 TEST_HEADER include/spdk/histogram_data.h 00:03:35.068 TEST_HEADER include/spdk/idxd.h 00:03:35.068 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.068 TEST_HEADER include/spdk/init.h 00:03:35.068 TEST_HEADER include/spdk/ioat.h 00:03:35.068 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.068 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.068 TEST_HEADER include/spdk/json.h 00:03:35.068 CC test/event/event_perf/event_perf.o 00:03:35.068 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.068 TEST_HEADER include/spdk/keyring.h 00:03:35.068 TEST_HEADER include/spdk/keyring_module.h 00:03:35.068 TEST_HEADER include/spdk/likely.h 00:03:35.068 CC examples/accel/perf/accel_perf.o 00:03:35.068 TEST_HEADER include/spdk/log.h 00:03:35.068 TEST_HEADER include/spdk/lvol.h 00:03:35.068 TEST_HEADER include/spdk/memory.h 00:03:35.068 TEST_HEADER include/spdk/mmio.h 00:03:35.068 CC test/dma/test_dma/test_dma.o 00:03:35.068 TEST_HEADER include/spdk/nbd.h 00:03:35.068 TEST_HEADER include/spdk/notify.h 00:03:35.068 TEST_HEADER include/spdk/nvme.h 00:03:35.068 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.068 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.068 CC test/bdev/bdevio/bdevio.o 00:03:35.068 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.068 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.068 CC test/app/bdev_svc/bdev_svc.o 00:03:35.068 CC test/accel/dif/dif.o 00:03:35.068 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.068 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.068 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.068 TEST_HEADER include/spdk/nvmf.h 00:03:35.068 CC test/blobfs/mkfs/mkfs.o 00:03:35.068 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.068 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.068 TEST_HEADER include/spdk/opal.h 00:03:35.068 TEST_HEADER include/spdk/opal_spec.h 00:03:35.068 TEST_HEADER include/spdk/pci_ids.h 00:03:35.068 TEST_HEADER include/spdk/pipe.h 00:03:35.068 TEST_HEADER include/spdk/queue.h 00:03:35.068 TEST_HEADER include/spdk/reduce.h 00:03:35.068 TEST_HEADER include/spdk/rpc.h 00:03:35.068 TEST_HEADER include/spdk/scheduler.h 00:03:35.068 TEST_HEADER include/spdk/scsi.h 00:03:35.068 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.068 TEST_HEADER include/spdk/sock.h 00:03:35.068 TEST_HEADER include/spdk/stdinc.h 00:03:35.068 TEST_HEADER include/spdk/string.h 00:03:35.068 TEST_HEADER include/spdk/thread.h 00:03:35.068 TEST_HEADER include/spdk/trace.h 00:03:35.068 TEST_HEADER include/spdk/trace_parser.h 00:03:35.068 TEST_HEADER include/spdk/tree.h 00:03:35.068 TEST_HEADER include/spdk/ublk.h 00:03:35.068 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.068 TEST_HEADER include/spdk/util.h 00:03:35.068 TEST_HEADER include/spdk/uuid.h 00:03:35.068 TEST_HEADER include/spdk/version.h 00:03:35.068 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.068 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.068 TEST_HEADER include/spdk/vhost.h 00:03:35.068 
TEST_HEADER include/spdk/vmd.h 00:03:35.068 TEST_HEADER include/spdk/xor.h 00:03:35.068 TEST_HEADER include/spdk/zipf.h 00:03:35.068 CXX test/cpp_headers/accel.o 00:03:35.068 LINK event_perf 00:03:35.326 LINK bdev_svc 00:03:35.326 LINK mkfs 00:03:35.326 CXX test/cpp_headers/accel_module.o 00:03:35.326 LINK spdk_trace 00:03:35.326 CC test/event/reactor/reactor.o 00:03:35.585 LINK bdevio 00:03:35.585 LINK dif 00:03:35.585 LINK test_dma 00:03:35.585 LINK accel_perf 00:03:35.585 CXX test/cpp_headers/assert.o 00:03:35.585 LINK reactor 00:03:35.585 CC test/app/histogram_perf/histogram_perf.o 00:03:35.585 CC app/trace_record/trace_record.o 00:03:35.585 CXX test/cpp_headers/barrier.o 00:03:35.585 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.842 LINK mem_callbacks 00:03:35.842 LINK histogram_perf 00:03:35.842 CC test/app/jsoncat/jsoncat.o 00:03:35.842 CC test/event/reactor_perf/reactor_perf.o 00:03:35.842 CXX test/cpp_headers/base64.o 00:03:35.842 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.842 LINK spdk_trace_record 00:03:35.842 CC examples/ioat/perf/perf.o 00:03:35.842 CC examples/blob/hello_world/hello_blob.o 00:03:35.842 LINK jsoncat 00:03:36.100 CC test/env/vtophys/vtophys.o 00:03:36.100 LINK reactor_perf 00:03:36.100 CC examples/ioat/verify/verify.o 00:03:36.100 CXX test/cpp_headers/bdev.o 00:03:36.100 LINK nvme_fuzz 00:03:36.100 LINK ioat_perf 00:03:36.100 LINK hello_bdev 00:03:36.100 LINK vtophys 00:03:36.100 LINK hello_blob 00:03:36.100 CC app/nvmf_tgt/nvmf_main.o 00:03:36.100 CXX test/cpp_headers/bdev_module.o 00:03:36.360 LINK verify 00:03:36.360 CC app/iscsi_tgt/iscsi_tgt.o 00:03:36.360 CC test/event/app_repeat/app_repeat.o 00:03:36.360 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:36.360 LINK nvmf_tgt 00:03:36.360 CXX test/cpp_headers/bdev_zone.o 00:03:36.360 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.360 LINK app_repeat 00:03:36.618 CC app/spdk_tgt/spdk_tgt.o 00:03:36.618 LINK iscsi_tgt 00:03:36.618 CC app/spdk_lspci/spdk_lspci.o 00:03:36.618 CC examples/blob/cli/blobcli.o 00:03:36.618 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.618 LINK env_dpdk_post_init 00:03:36.618 CXX test/cpp_headers/bit_array.o 00:03:36.618 LINK spdk_lspci 00:03:36.618 LINK spdk_tgt 00:03:36.877 CXX test/cpp_headers/bit_pool.o 00:03:36.877 CC test/event/scheduler/scheduler.o 00:03:36.877 CC examples/nvme/hello_world/hello_world.o 00:03:36.877 CXX test/cpp_headers/blob_bdev.o 00:03:36.877 CC test/env/memory/memory_ut.o 00:03:36.877 CC examples/sock/hello_world/hello_sock.o 00:03:37.137 CC app/spdk_nvme_perf/perf.o 00:03:37.137 LINK blobcli 00:03:37.137 CC app/spdk_nvme_identify/identify.o 00:03:37.137 CXX test/cpp_headers/blobfs_bdev.o 00:03:37.137 LINK scheduler 00:03:37.137 LINK hello_world 00:03:37.137 LINK hello_sock 00:03:37.137 CXX test/cpp_headers/blobfs.o 00:03:37.137 CXX test/cpp_headers/blob.o 00:03:37.395 LINK bdevperf 00:03:37.395 CXX test/cpp_headers/conf.o 00:03:37.395 CC examples/nvme/reconnect/reconnect.o 00:03:37.395 CC app/spdk_nvme_discover/discovery_aer.o 00:03:37.395 CXX test/cpp_headers/config.o 00:03:37.395 CXX test/cpp_headers/cpuset.o 00:03:37.395 CC app/spdk_top/spdk_top.o 00:03:37.653 CC app/vhost/vhost.o 00:03:37.653 LINK spdk_nvme_discover 00:03:37.653 CC app/spdk_dd/spdk_dd.o 00:03:37.653 CXX test/cpp_headers/crc16.o 00:03:37.653 LINK reconnect 00:03:37.912 LINK memory_ut 00:03:37.912 LINK spdk_nvme_perf 00:03:37.912 LINK vhost 00:03:37.912 LINK spdk_nvme_identify 00:03:37.912 CXX test/cpp_headers/crc32.o 00:03:37.912 CC examples/vmd/lsvmd/lsvmd.o 
00:03:37.912 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.912 CXX test/cpp_headers/crc64.o 00:03:37.912 LINK iscsi_fuzz 00:03:38.171 LINK spdk_dd 00:03:38.171 CC examples/nvme/arbitration/arbitration.o 00:03:38.171 LINK lsvmd 00:03:38.171 CC examples/nvme/hotplug/hotplug.o 00:03:38.171 CC test/env/pci/pci_ut.o 00:03:38.171 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.171 CXX test/cpp_headers/dif.o 00:03:38.441 LINK cmb_copy 00:03:38.441 CC examples/nvme/abort/abort.o 00:03:38.441 LINK hotplug 00:03:38.441 CC examples/vmd/led/led.o 00:03:38.441 LINK spdk_top 00:03:38.441 CXX test/cpp_headers/dma.o 00:03:38.441 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.441 LINK arbitration 00:03:38.441 LINK pci_ut 00:03:38.441 LINK led 00:03:38.441 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:38.712 CXX test/cpp_headers/endian.o 00:03:38.712 LINK nvme_manage 00:03:38.712 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.712 CC app/fio/nvme/fio_plugin.o 00:03:38.712 LINK abort 00:03:38.712 CXX test/cpp_headers/env_dpdk.o 00:03:38.712 CC examples/util/zipf/zipf.o 00:03:38.712 CC examples/nvmf/nvmf/nvmf.o 00:03:38.712 LINK pmr_persistence 00:03:38.712 CC app/fio/bdev/fio_plugin.o 00:03:38.970 LINK zipf 00:03:38.970 CXX test/cpp_headers/env.o 00:03:38.970 LINK vhost_fuzz 00:03:38.970 CXX test/cpp_headers/event.o 00:03:38.970 CC test/app/stub/stub.o 00:03:38.970 CC examples/thread/thread/thread_ex.o 00:03:38.970 CC test/lvol/esnap/esnap.o 00:03:38.970 LINK nvmf 00:03:39.228 CXX test/cpp_headers/fd_group.o 00:03:39.228 CXX test/cpp_headers/fd.o 00:03:39.228 LINK stub 00:03:39.228 LINK thread 00:03:39.228 CXX test/cpp_headers/file.o 00:03:39.228 LINK spdk_nvme 00:03:39.228 CXX test/cpp_headers/ftl.o 00:03:39.228 LINK spdk_bdev 00:03:39.228 CC examples/idxd/perf/perf.o 00:03:39.487 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.487 CXX test/cpp_headers/gpt_spec.o 00:03:39.487 CC test/rpc_client/rpc_client_test.o 00:03:39.487 CXX test/cpp_headers/hexlify.o 00:03:39.487 CC test/nvme/aer/aer.o 00:03:39.487 CC test/nvme/reset/reset.o 00:03:39.745 LINK interrupt_tgt 00:03:39.745 CC test/thread/poller_perf/poller_perf.o 00:03:39.745 CXX test/cpp_headers/histogram_data.o 00:03:39.745 LINK rpc_client_test 00:03:39.745 LINK idxd_perf 00:03:39.745 CC test/nvme/sgl/sgl.o 00:03:39.745 LINK poller_perf 00:03:39.745 CXX test/cpp_headers/idxd.o 00:03:39.745 LINK aer 00:03:39.745 LINK reset 00:03:40.004 CXX test/cpp_headers/idxd_spec.o 00:03:40.004 CC test/nvme/e2edp/nvme_dp.o 00:03:40.004 CC test/nvme/overhead/overhead.o 00:03:40.004 CC test/nvme/err_injection/err_injection.o 00:03:40.004 CXX test/cpp_headers/init.o 00:03:40.004 LINK sgl 00:03:40.004 CC test/nvme/startup/startup.o 00:03:40.262 CC test/nvme/reserve/reserve.o 00:03:40.262 CC test/nvme/simple_copy/simple_copy.o 00:03:40.262 LINK overhead 00:03:40.262 CXX test/cpp_headers/ioat.o 00:03:40.262 LINK err_injection 00:03:40.262 LINK nvme_dp 00:03:40.262 LINK startup 00:03:40.262 CC test/nvme/connect_stress/connect_stress.o 00:03:40.262 LINK reserve 00:03:40.520 LINK simple_copy 00:03:40.520 CXX test/cpp_headers/ioat_spec.o 00:03:40.520 LINK connect_stress 00:03:40.520 CXX test/cpp_headers/iscsi_spec.o 00:03:40.779 CC test/nvme/compliance/nvme_compliance.o 00:03:40.779 CC test/nvme/boot_partition/boot_partition.o 00:03:40.779 CC test/nvme/fused_ordering/fused_ordering.o 00:03:40.779 CXX test/cpp_headers/json.o 00:03:40.779 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:40.779 CC test/nvme/fdp/fdp.o 00:03:40.779 CXX 
test/cpp_headers/jsonrpc.o 00:03:40.779 LINK boot_partition 00:03:40.779 CC test/nvme/cuse/cuse.o 00:03:40.779 CXX test/cpp_headers/keyring.o 00:03:41.037 LINK fused_ordering 00:03:41.037 LINK doorbell_aers 00:03:41.037 CXX test/cpp_headers/keyring_module.o 00:03:41.037 LINK nvme_compliance 00:03:41.037 CXX test/cpp_headers/likely.o 00:03:41.037 CXX test/cpp_headers/log.o 00:03:41.037 CXX test/cpp_headers/lvol.o 00:03:41.037 LINK fdp 00:03:41.037 CXX test/cpp_headers/memory.o 00:03:41.295 CXX test/cpp_headers/mmio.o 00:03:41.295 CXX test/cpp_headers/nbd.o 00:03:41.295 CXX test/cpp_headers/notify.o 00:03:41.295 CXX test/cpp_headers/nvme.o 00:03:41.295 CXX test/cpp_headers/nvme_intel.o 00:03:41.295 CXX test/cpp_headers/nvme_ocssd.o 00:03:41.295 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.295 CXX test/cpp_headers/nvme_spec.o 00:03:41.295 CXX test/cpp_headers/nvme_zns.o 00:03:41.295 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.553 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.553 CXX test/cpp_headers/nvmf.o 00:03:41.553 CXX test/cpp_headers/nvmf_spec.o 00:03:41.553 CXX test/cpp_headers/nvmf_transport.o 00:03:41.553 CXX test/cpp_headers/opal.o 00:03:41.553 CXX test/cpp_headers/opal_spec.o 00:03:41.553 CXX test/cpp_headers/pci_ids.o 00:03:41.553 CXX test/cpp_headers/pipe.o 00:03:41.553 CXX test/cpp_headers/queue.o 00:03:41.553 CXX test/cpp_headers/reduce.o 00:03:41.811 CXX test/cpp_headers/scheduler.o 00:03:41.811 CXX test/cpp_headers/rpc.o 00:03:41.811 CXX test/cpp_headers/scsi.o 00:03:41.811 CXX test/cpp_headers/scsi_spec.o 00:03:41.811 CXX test/cpp_headers/sock.o 00:03:41.811 CXX test/cpp_headers/stdinc.o 00:03:41.811 CXX test/cpp_headers/string.o 00:03:41.811 CXX test/cpp_headers/thread.o 00:03:41.811 CXX test/cpp_headers/trace.o 00:03:42.068 CXX test/cpp_headers/trace_parser.o 00:03:42.068 CXX test/cpp_headers/tree.o 00:03:42.068 LINK cuse 00:03:42.068 CXX test/cpp_headers/ublk.o 00:03:42.068 CXX test/cpp_headers/util.o 00:03:42.068 CXX test/cpp_headers/uuid.o 00:03:42.068 CXX test/cpp_headers/version.o 00:03:42.068 CXX test/cpp_headers/vfio_user_pci.o 00:03:42.068 CXX test/cpp_headers/vfio_user_spec.o 00:03:42.068 CXX test/cpp_headers/vhost.o 00:03:42.327 CXX test/cpp_headers/vmd.o 00:03:42.327 CXX test/cpp_headers/xor.o 00:03:42.327 CXX test/cpp_headers/zipf.o 00:03:44.250 LINK esnap 00:03:46.180 00:03:46.180 real 1m6.967s 00:03:46.180 user 6m58.904s 00:03:46.180 sys 1m30.536s 00:03:46.180 17:07:15 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:46.180 17:07:15 -- common/autotest_common.sh@10 -- $ set +x 00:03:46.180 ************************************ 00:03:46.180 END TEST make 00:03:46.180 ************************************ 00:03:46.180 17:07:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.180 17:07:15 -- pm/common@30 -- $ signal_monitor_resources TERM 00:03:46.180 17:07:15 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:03:46.180 17:07:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.180 17:07:15 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.180 17:07:15 -- pm/common@45 -- $ pid=5198 00:03:46.180 17:07:15 -- pm/common@52 -- $ sudo kill -TERM 5198 00:03:46.180 17:07:15 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.180 17:07:15 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.180 17:07:15 -- pm/common@45 -- $ pid=5197 00:03:46.180 17:07:15 -- pm/common@52 -- $ sudo kill -TERM 5197 
00:03:46.180 17:07:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:46.180 17:07:15 -- nvmf/common.sh@7 -- # uname -s 00:03:46.180 17:07:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.180 17:07:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.180 17:07:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.180 17:07:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.180 17:07:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.180 17:07:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.180 17:07:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.180 17:07:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.180 17:07:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.180 17:07:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.180 17:07:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:03:46.180 17:07:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:03:46.180 17:07:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.180 17:07:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.180 17:07:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:46.180 17:07:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.180 17:07:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:46.180 17:07:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.180 17:07:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.180 17:07:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.180 17:07:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.180 17:07:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.180 17:07:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.180 17:07:15 -- paths/export.sh@5 -- # export PATH 00:03:46.180 17:07:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.180 17:07:15 -- nvmf/common.sh@47 -- # : 0 00:03:46.180 17:07:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:46.180 17:07:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:46.180 17:07:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.180 17:07:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.180 17:07:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:03:46.180 17:07:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:46.180 17:07:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:46.180 17:07:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:46.180 17:07:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.180 17:07:15 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.180 17:07:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.180 17:07:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.180 17:07:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:46.180 17:07:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.180 17:07:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:46.180 17:07:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.180 17:07:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.180 17:07:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.180 17:07:16 -- spdk/autotest.sh@48 -- # udevadm_pid=54440 00:03:46.180 17:07:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.180 17:07:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.180 17:07:16 -- pm/common@17 -- # local monitor 00:03:46.180 17:07:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.180 17:07:16 -- pm/common@21 -- # date +%s 00:03:46.180 17:07:16 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54443 00:03:46.180 17:07:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.180 17:07:16 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54450 00:03:46.180 17:07:16 -- pm/common@26 -- # sleep 1 00:03:46.180 17:07:16 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714064836 00:03:46.180 17:07:16 -- pm/common@21 -- # date +%s 00:03:46.180 17:07:16 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714064836 00:03:46.180 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714064836_collect-cpu-load.pm.log 00:03:46.180 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714064836_collect-vmstat.pm.log 00:03:47.115 17:07:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.116 17:07:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.116 17:07:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:47.116 17:07:17 -- common/autotest_common.sh@10 -- # set +x 00:03:47.116 17:07:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.116 17:07:17 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:47.116 17:07:17 -- common/autotest_common.sh@10 -- # set +x 00:03:47.116 17:07:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:47.116 17:07:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:47.116 17:07:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:47.116 17:07:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:47.116 17:07:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:47.116 17:07:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.116 17:07:17 
-- common/autotest_common.sh@1441 -- # uname 00:03:47.116 17:07:17 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:03:47.116 17:07:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.116 17:07:17 -- common/autotest_common.sh@1461 -- # uname 00:03:47.116 17:07:17 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:03:47.116 17:07:17 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:47.374 17:07:17 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:47.374 17:07:17 -- spdk/autotest.sh@72 -- # hash lcov 00:03:47.374 17:07:17 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:47.374 17:07:17 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:47.374 --rc lcov_branch_coverage=1 00:03:47.374 --rc lcov_function_coverage=1 00:03:47.374 --rc genhtml_branch_coverage=1 00:03:47.374 --rc genhtml_function_coverage=1 00:03:47.374 --rc genhtml_legend=1 00:03:47.374 --rc geninfo_all_blocks=1 00:03:47.374 ' 00:03:47.374 17:07:17 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:47.374 --rc lcov_branch_coverage=1 00:03:47.374 --rc lcov_function_coverage=1 00:03:47.375 --rc genhtml_branch_coverage=1 00:03:47.375 --rc genhtml_function_coverage=1 00:03:47.375 --rc genhtml_legend=1 00:03:47.375 --rc geninfo_all_blocks=1 00:03:47.375 ' 00:03:47.375 17:07:17 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:47.375 --rc lcov_branch_coverage=1 00:03:47.375 --rc lcov_function_coverage=1 00:03:47.375 --rc genhtml_branch_coverage=1 00:03:47.375 --rc genhtml_function_coverage=1 00:03:47.375 --rc genhtml_legend=1 00:03:47.375 --rc geninfo_all_blocks=1 00:03:47.375 --no-external' 00:03:47.375 17:07:17 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:47.375 --rc lcov_branch_coverage=1 00:03:47.375 --rc lcov_function_coverage=1 00:03:47.375 --rc genhtml_branch_coverage=1 00:03:47.375 --rc genhtml_function_coverage=1 00:03:47.375 --rc genhtml_legend=1 00:03:47.375 --rc geninfo_all_blocks=1 00:03:47.375 --no-external' 00:03:47.375 17:07:17 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:47.375 lcov: LCOV version 1.14 00:03:47.375 17:07:17 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:55.495 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:55.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:55.495 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:55.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:55.495 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:55.495 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:00.765 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:00.765 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.984 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:12.984 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:12.984 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:12.984 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:12.984 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:12.985 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:12.985 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:12.985 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:15.520 17:07:45 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:15.520 17:07:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:15.520 17:07:45 -- common/autotest_common.sh@10 -- # set +x 00:04:15.520 17:07:45 -- spdk/autotest.sh@91 -- # rm -f 00:04:15.520 17:07:45 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.039 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:16.039 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:16.039 17:07:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:16.039 17:07:45 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:16.039 17:07:45 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:16.039 17:07:45 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:16.039 17:07:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.039 17:07:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:16.039 17:07:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:16.039 17:07:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.039 17:07:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.039 17:07:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.039 17:07:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:16.039 17:07:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:16.039 17:07:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.039 17:07:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.039 17:07:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.039 17:07:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:16.039 17:07:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:16.039 17:07:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:16.039 17:07:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.039 17:07:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:16.039 17:07:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 
00:04:16.039 17:07:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:16.039 17:07:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:16.039 17:07:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:16.039 17:07:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:16.039 17:07:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.039 17:07:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.039 17:07:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:16.039 17:07:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:16.039 17:07:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.039 No valid GPT data, bailing 00:04:16.039 17:07:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.039 17:07:45 -- scripts/common.sh@391 -- # pt= 00:04:16.039 17:07:45 -- scripts/common.sh@392 -- # return 1 00:04:16.039 17:07:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.039 1+0 records in 00:04:16.039 1+0 records out 00:04:16.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471905 s, 222 MB/s 00:04:16.039 17:07:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.039 17:07:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.039 17:07:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:16.039 17:07:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:16.039 17:07:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:16.039 No valid GPT data, bailing 00:04:16.039 17:07:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.039 17:07:45 -- scripts/common.sh@391 -- # pt= 00:04:16.039 17:07:45 -- scripts/common.sh@392 -- # return 1 00:04:16.039 17:07:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:16.039 1+0 records in 00:04:16.039 1+0 records out 00:04:16.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464806 s, 226 MB/s 00:04:16.039 17:07:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.039 17:07:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.039 17:07:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:16.039 17:07:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:16.039 17:07:45 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:16.039 No valid GPT data, bailing 00:04:16.298 17:07:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:16.298 17:07:46 -- scripts/common.sh@391 -- # pt= 00:04:16.298 17:07:46 -- scripts/common.sh@392 -- # return 1 00:04:16.298 17:07:46 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:16.298 1+0 records in 00:04:16.298 1+0 records out 00:04:16.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00350047 s, 300 MB/s 00:04:16.298 17:07:46 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.298 17:07:46 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.298 17:07:46 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:16.298 17:07:46 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:16.298 17:07:46 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:16.298 No valid GPT data, bailing 00:04:16.298 17:07:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:16.298 
17:07:46 -- scripts/common.sh@391 -- # pt= 00:04:16.298 17:07:46 -- scripts/common.sh@392 -- # return 1 00:04:16.298 17:07:46 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:16.298 1+0 records in 00:04:16.298 1+0 records out 00:04:16.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460035 s, 228 MB/s 00:04:16.298 17:07:46 -- spdk/autotest.sh@118 -- # sync 00:04:16.298 17:07:46 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.298 17:07:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.298 17:07:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:18.203 17:07:48 -- spdk/autotest.sh@124 -- # uname -s 00:04:18.203 17:07:48 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:18.203 17:07:48 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:18.203 17:07:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.203 17:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.203 17:07:48 -- common/autotest_common.sh@10 -- # set +x 00:04:18.203 ************************************ 00:04:18.203 START TEST setup.sh 00:04:18.203 ************************************ 00:04:18.203 17:07:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:18.462 * Looking for test storage... 00:04:18.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.462 17:07:48 -- setup/test-setup.sh@10 -- # uname -s 00:04:18.462 17:07:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:18.462 17:07:48 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:18.462 17:07:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.462 17:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.462 17:07:48 -- common/autotest_common.sh@10 -- # set +x 00:04:18.462 ************************************ 00:04:18.462 START TEST acl 00:04:18.462 ************************************ 00:04:18.462 17:07:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:18.462 * Looking for test storage... 
00:04:18.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.462 17:07:48 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:18.462 17:07:48 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:18.462 17:07:48 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:18.462 17:07:48 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:18.462 17:07:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.462 17:07:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:18.462 17:07:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:18.462 17:07:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.462 17:07:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.462 17:07:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.462 17:07:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:18.462 17:07:48 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:18.462 17:07:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.462 17:07:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.462 17:07:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.462 17:07:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:18.462 17:07:48 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:18.462 17:07:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:18.462 17:07:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.462 17:07:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:18.462 17:07:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:18.462 17:07:48 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:18.462 17:07:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:18.462 17:07:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:18.462 17:07:48 -- setup/acl.sh@12 -- # devs=() 00:04:18.462 17:07:48 -- setup/acl.sh@12 -- # declare -a devs 00:04:18.462 17:07:48 -- setup/acl.sh@13 -- # drivers=() 00:04:18.462 17:07:48 -- setup/acl.sh@13 -- # declare -A drivers 00:04:18.462 17:07:48 -- setup/acl.sh@51 -- # setup reset 00:04:18.462 17:07:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.462 17:07:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.398 17:07:49 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:19.398 17:07:49 -- setup/acl.sh@16 -- # local dev driver 00:04:19.398 17:07:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.398 17:07:49 -- setup/acl.sh@15 -- # setup output status 00:04:19.398 17:07:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.398 17:07:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:19.966 17:07:49 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:19.966 17:07:49 -- setup/acl.sh@19 -- # continue 00:04:19.966 17:07:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.966 Hugepages 00:04:19.966 node hugesize free / total 00:04:19.966 17:07:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:19.966 17:07:49 -- setup/acl.sh@19 -- # continue 00:04:19.966 17:07:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.966 00:04:19.966 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:04:19.966 17:07:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:19.966 17:07:49 -- setup/acl.sh@19 -- # continue 00:04:19.966 17:07:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.966 17:07:49 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:19.966 17:07:49 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:19.966 17:07:49 -- setup/acl.sh@20 -- # continue 00:04:19.966 17:07:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.225 17:07:49 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:20.225 17:07:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.225 17:07:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:20.225 17:07:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.225 17:07:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.225 17:07:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.225 17:07:50 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:20.225 17:07:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.225 17:07:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:20.225 17:07:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.225 17:07:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.225 17:07:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.225 17:07:50 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:20.225 17:07:50 -- setup/acl.sh@54 -- # run_test denied denied 00:04:20.225 17:07:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.225 17:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.225 17:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:20.225 ************************************ 00:04:20.225 START TEST denied 00:04:20.225 ************************************ 00:04:20.225 17:07:50 -- common/autotest_common.sh@1111 -- # denied 00:04:20.225 17:07:50 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:20.225 17:07:50 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:20.225 17:07:50 -- setup/acl.sh@38 -- # setup output config 00:04:20.225 17:07:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.225 17:07:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.161 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:21.161 17:07:50 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:21.161 17:07:50 -- setup/acl.sh@28 -- # local dev driver 00:04:21.161 17:07:50 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:21.161 17:07:50 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:21.161 17:07:50 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:21.161 17:07:50 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:21.161 17:07:50 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:21.161 17:07:50 -- setup/acl.sh@41 -- # setup reset 00:04:21.161 17:07:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.161 17:07:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.761 00:04:21.761 real 0m1.435s 00:04:21.761 user 0m0.555s 00:04:21.761 sys 0m0.817s 00:04:21.761 17:07:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:21.761 ************************************ 00:04:21.761 END TEST denied 00:04:21.761 17:07:51 -- common/autotest_common.sh@10 -- # set +x 00:04:21.761 ************************************ 00:04:21.761 17:07:51 -- setup/acl.sh@55 
-- # run_test allowed allowed 00:04:21.761 17:07:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.761 17:07:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.761 17:07:51 -- common/autotest_common.sh@10 -- # set +x 00:04:21.761 ************************************ 00:04:21.761 START TEST allowed 00:04:21.761 ************************************ 00:04:21.761 17:07:51 -- common/autotest_common.sh@1111 -- # allowed 00:04:21.761 17:07:51 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:21.761 17:07:51 -- setup/acl.sh@45 -- # setup output config 00:04:21.761 17:07:51 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:21.761 17:07:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.761 17:07:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.697 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.697 17:07:52 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:22.697 17:07:52 -- setup/acl.sh@28 -- # local dev driver 00:04:22.697 17:07:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:22.697 17:07:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:22.697 17:07:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:22.697 17:07:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:22.697 17:07:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:22.697 17:07:52 -- setup/acl.sh@48 -- # setup reset 00:04:22.697 17:07:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.698 17:07:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.266 00:04:23.266 real 0m1.515s 00:04:23.266 user 0m0.693s 00:04:23.266 sys 0m0.801s 00:04:23.266 17:07:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.266 ************************************ 00:04:23.266 END TEST allowed 00:04:23.266 ************************************ 00:04:23.266 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 00:04:23.266 real 0m4.898s 00:04:23.266 user 0m2.175s 00:04:23.266 sys 0m2.614s 00:04:23.266 17:07:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.266 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.266 ************************************ 00:04:23.266 END TEST acl 00:04:23.266 ************************************ 00:04:23.526 17:07:53 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:23.526 17:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.526 17:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.526 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.526 ************************************ 00:04:23.526 START TEST hugepages 00:04:23.526 ************************************ 00:04:23.526 17:07:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:23.526 * Looking for test storage... 
00:04:23.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:23.526 17:07:53 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:23.526 17:07:53 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:23.526 17:07:53 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:23.526 17:07:53 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:23.526 17:07:53 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:23.526 17:07:53 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:23.526 17:07:53 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:23.526 17:07:53 -- setup/common.sh@18 -- # local node= 00:04:23.526 17:07:53 -- setup/common.sh@19 -- # local var val 00:04:23.526 17:07:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.526 17:07:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.526 17:07:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.526 17:07:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.526 17:07:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.526 17:07:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.526 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.526 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.526 17:07:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5441244 kB' 'MemAvailable: 7407216 kB' 'Buffers: 2436 kB' 'Cached: 2175372 kB' 'SwapCached: 0 kB' 'Active: 876060 kB' 'Inactive: 1408164 kB' 'Active(anon): 116904 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1440 kB' 'Writeback: 0 kB' 'AnonPages: 107860 kB' 'Mapped: 48936 kB' 'Shmem: 10488 kB' 'KReclaimable: 71180 kB' 'Slab: 146336 kB' 'SReclaimable: 71180 kB' 'SUnreclaim: 75156 kB' 'KernelStack: 6444 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 339612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- 
setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.527 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.527 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # continue 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.528 17:07:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.528 17:07:53 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.528 17:07:53 -- setup/common.sh@33 -- # echo 2048 00:04:23.528 17:07:53 -- setup/common.sh@33 -- # return 0 00:04:23.528 17:07:53 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:23.528 17:07:53 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:23.528 17:07:53 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:23.528 17:07:53 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:23.528 17:07:53 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:23.528 17:07:53 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
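The long runs of IFS=': ' / read -r var val _ / continue statements above are setup/common.sh's get_meminfo helper stepping through /proc/meminfo one field at a time until it reaches the requested key (Hugepagesize here), echoing its value and returning; hugepages.sh then seeds its defaults from that 2048 kB page size. A minimal sketch of that pattern, simplified from the trace rather than taken verbatim from the SPDK scripts (the real helper also uses mapfile and strips the "Node N " prefix when a node is given), could look like:

  # Simplified sketch of the meminfo scan traced above (not the verbatim helper).
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node queries read the node-local file seen in the [[ -e ... ]] checks above.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          # Every non-matching key produces one "continue" line in the trace.
          [[ $var == "$get" ]] || continue
          echo "$val"    # e.g. 2048 for Hugepagesize
          return 0
      done < "$mem_f"
      return 1
  }
  # hugepages.sh then derives its defaults from the detected page size:
  default_hugepages=$(get_meminfo Hugepagesize)   # 2048 (kB)
  default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
  global_huge_nr=/proc/sys/vm/nr_hugepages

With HUGEMEM, HUGENODE and NRHUGE left unset (the unset -v lines here), these defaults are what the default_setup test below exercises.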
00:04:23.528 17:07:53 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:23.528 17:07:53 -- setup/hugepages.sh@207 -- # get_nodes 00:04:23.528 17:07:53 -- setup/hugepages.sh@27 -- # local node 00:04:23.528 17:07:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.528 17:07:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:23.528 17:07:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.528 17:07:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.528 17:07:53 -- setup/hugepages.sh@208 -- # clear_hp 00:04:23.528 17:07:53 -- setup/hugepages.sh@37 -- # local node hp 00:04:23.528 17:07:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.528 17:07:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.528 17:07:53 -- setup/hugepages.sh@41 -- # echo 0 00:04:23.528 17:07:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.528 17:07:53 -- setup/hugepages.sh@41 -- # echo 0 00:04:23.528 17:07:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:23.528 17:07:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:23.528 17:07:53 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:23.528 17:07:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.528 17:07:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.528 17:07:53 -- common/autotest_common.sh@10 -- # set +x 00:04:23.787 ************************************ 00:04:23.787 START TEST default_setup 00:04:23.787 ************************************ 00:04:23.787 17:07:53 -- common/autotest_common.sh@1111 -- # default_setup 00:04:23.787 17:07:53 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:23.787 17:07:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.787 17:07:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:23.787 17:07:53 -- setup/hugepages.sh@51 -- # shift 00:04:23.787 17:07:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:23.787 17:07:53 -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.787 17:07:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.787 17:07:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.787 17:07:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:23.787 17:07:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:23.787 17:07:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.787 17:07:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.787 17:07:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.787 17:07:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.787 17:07:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.787 17:07:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:23.787 17:07:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.787 17:07:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:23.787 17:07:53 -- setup/hugepages.sh@73 -- # return 0 00:04:23.787 17:07:53 -- setup/hugepages.sh@137 -- # setup output 00:04:23.787 17:07:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.788 17:07:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.355 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.619 
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.620 17:07:54 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:24.620 17:07:54 -- setup/hugepages.sh@89 -- # local node 00:04:24.620 17:07:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.620 17:07:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.620 17:07:54 -- setup/hugepages.sh@92 -- # local surp 00:04:24.620 17:07:54 -- setup/hugepages.sh@93 -- # local resv 00:04:24.620 17:07:54 -- setup/hugepages.sh@94 -- # local anon 00:04:24.620 17:07:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.620 17:07:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.620 17:07:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.620 17:07:54 -- setup/common.sh@18 -- # local node= 00:04:24.620 17:07:54 -- setup/common.sh@19 -- # local var val 00:04:24.620 17:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.620 17:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.620 17:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.620 17:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.620 17:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.620 17:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7530104 kB' 'MemAvailable: 9495904 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 'SwapCached: 0 kB' 'Active: 892280 kB' 'Inactive: 1408168 kB' 'Active(anon): 133124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 124240 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146012 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75184 kB' 'KernelStack: 6416 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 
17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.620 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.620 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 
-- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.621 17:07:54 -- setup/common.sh@33 -- # echo 0 00:04:24.621 17:07:54 -- setup/common.sh@33 -- # return 0 00:04:24.621 17:07:54 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.621 17:07:54 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.621 17:07:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.621 17:07:54 -- setup/common.sh@18 -- # local node= 00:04:24.621 17:07:54 -- setup/common.sh@19 -- # local var val 00:04:24.621 17:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.621 17:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.621 17:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.621 17:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.621 17:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.621 17:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7530360 kB' 'MemAvailable: 9496160 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 'SwapCached: 0 kB' 'Active: 892104 kB' 'Inactive: 1408168 kB' 'Active(anon): 132948 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 124072 kB' 'Mapped: 48972 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146012 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75184 kB' 'KernelStack: 6432 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.621 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.621 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- 
setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.622 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.622 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.623 17:07:54 -- setup/common.sh@33 -- # echo 0 00:04:24.623 17:07:54 -- setup/common.sh@33 -- # return 0 00:04:24.623 17:07:54 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.623 17:07:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.623 17:07:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.623 17:07:54 -- setup/common.sh@18 -- # local node= 00:04:24.623 17:07:54 -- setup/common.sh@19 -- # local var val 00:04:24.623 17:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.623 17:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.623 17:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.623 17:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.623 17:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.623 17:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7530360 kB' 'MemAvailable: 9496172 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 
'SwapCached: 0 kB' 'Active: 892000 kB' 'Inactive: 1408180 kB' 'Active(anon): 132844 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 124220 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146012 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75184 kB' 'KernelStack: 6448 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.623 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.623 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 
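The near-identical scans running through this part of the log are verify_nr_hugepages re-reading /proc/meminfo for AnonHugePages, HugePages_Surp and HugePages_Rsvd (each 0 in this run); just below, it compares them with the 1024 pages default_setup requested (a 2097152 kB request at the 2048 kB page size detected earlier). A hedged reconstruction of that accounting, reusing the get_meminfo sketch above — the total variable, the combined echo and the exact expansion used by hugepages.sh are assumptions made for illustration:

  # Rough sketch of the verification implied by the trace; names are illustrative.
  nr_hugepages=$(( 2097152 / 2048 ))    # 1024 pages requested by default_setup
  anon=$(get_meminfo AnonHugePages)     # 0 in this run
  surp=$(get_meminfo HugePages_Surp)    # 0
  resv=$(get_meminfo HugePages_Rsvd)    # 0
  total=$(get_meminfo HugePages_Total)  # 1024 per the meminfo dumps above
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  # The pool is considered healthy when the kernel-visible total accounts for the
  # request plus any surplus and reserved pages, i.e. 1024 == 1024 + 0 + 0 here.
  (( total == nr_hugepages + surp + resv )) || exit 1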
00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.624 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.624 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.624 17:07:54 -- setup/common.sh@33 -- # echo 0 00:04:24.624 17:07:54 -- setup/common.sh@33 -- # return 0 00:04:24.624 17:07:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.624 nr_hugepages=1024 00:04:24.624 17:07:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.624 resv_hugepages=0 00:04:24.624 17:07:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.624 surplus_hugepages=0 00:04:24.624 17:07:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.624 anon_hugepages=0 00:04:24.624 17:07:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.624 17:07:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.624 17:07:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.624 17:07:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.624 17:07:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.624 17:07:54 -- setup/common.sh@18 -- # local node= 00:04:24.624 17:07:54 -- setup/common.sh@19 -- # local var val 00:04:24.624 17:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.624 17:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.624 17:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.624 17:07:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.625 17:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.625 17:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7530360 kB' 'MemAvailable: 9496172 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 'SwapCached: 0 kB' 'Active: 891792 kB' 'Inactive: 1408180 kB' 'Active(anon): 132636 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 123780 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146012 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75184 kB' 'KernelStack: 6400 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356284 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.625 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.625 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 
00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.626 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.626 17:07:54 -- setup/common.sh@33 -- # echo 1024 
00:04:24.626 17:07:54 -- setup/common.sh@33 -- # return 0 00:04:24.626 17:07:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.626 17:07:54 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.626 17:07:54 -- setup/hugepages.sh@27 -- # local node 00:04:24.626 17:07:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.626 17:07:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.626 17:07:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.626 17:07:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.626 17:07:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.626 17:07:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.626 17:07:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.626 17:07:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.626 17:07:54 -- setup/common.sh@18 -- # local node=0 00:04:24.626 17:07:54 -- setup/common.sh@19 -- # local var val 00:04:24.626 17:07:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.626 17:07:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.626 17:07:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.626 17:07:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.626 17:07:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.626 17:07:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.626 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7530360 kB' 'MemUsed: 4711620 kB' 'SwapCached: 0 kB' 'Active: 892052 kB' 'Inactive: 1408180 kB' 'Active(anon): 132896 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408180 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 2177804 kB' 'Mapped: 48932 kB' 'AnonPages: 124040 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70828 kB' 'Slab: 146012 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 
17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 
17:07:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.627 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.627 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # continue 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.628 17:07:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.628 17:07:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.628 17:07:54 -- setup/common.sh@33 -- # echo 0 00:04:24.628 17:07:54 -- setup/common.sh@33 -- # return 0 00:04:24.628 17:07:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.628 17:07:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.628 17:07:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.628 17:07:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.628 node0=1024 expecting 1024 00:04:24.628 17:07:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.628 17:07:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.628 00:04:24.628 real 0m0.997s 00:04:24.628 user 0m0.474s 00:04:24.628 sys 0m0.469s 00:04:24.628 17:07:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.628 17:07:54 -- common/autotest_common.sh@10 -- # set +x 00:04:24.628 ************************************ 00:04:24.628 END TEST default_setup 00:04:24.628 ************************************ 00:04:24.628 17:07:54 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:24.628 17:07:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.628 17:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.628 17:07:54 -- common/autotest_common.sh@10 -- # set +x 00:04:24.887 ************************************ 00:04:24.887 START TEST per_node_1G_alloc 00:04:24.887 ************************************ 
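The xtrace above is the get_meminfo helper in setup/common.sh at work: it reads /proc/meminfo (or a node's own meminfo file) with IFS=': ' and read -r var val _, emits a "continue" entry for every field that is not the one requested, and echoes the matching value (0 reserved pages, 1024 total pages for default_setup). The per_node_1G_alloc test starting here asks get_test_nr_hugepages for 1048576 kB on node 0, which at the 2048 kB Hugepagesize reported above works out to 512 pages (NRHUGE=512, HUGENODE=0). The snippet below is a minimal stand-alone sketch of that parsing pattern and arithmetic, not the actual SPDK helper; the name get_meminfo_sketch and the simplified per-node prefix handling are illustrative assumptions.

# Sketch only: approximates the get_meminfo pattern seen in the trace; the real
# helper strips the "Node <n> " prefix with an extglob array expansion instead.
get_meminfo_sketch() {   # usage: get_meminfo_sketch <Field> [node]
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}    # drop per-node prefix
        IFS=': ' read -r var val _ <<< "$line"
        # every non-matching field is one of the "continue" entries in the xtrace
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
# Hugepage arithmetic behind the test below: 1048576 kB / 2048 kB per page = 512.
echo $((1048576 / 2048))              # -> 512
get_meminfo_sketch HugePages_Total    # e.g. 1024 before the per-node test
get_meminfo_sketch HugePages_Surp 0   # surplus hugepages on node 0
# scripts/setup.sh picks these values up from the environment, as in the trace:
#   HUGENODE=0 NRHUGE=512 ./scripts/setup.sh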
00:04:24.887 17:07:54 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:04:24.887 17:07:54 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:24.887 17:07:54 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:24.887 17:07:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.887 17:07:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:24.887 17:07:54 -- setup/hugepages.sh@51 -- # shift 00:04:24.887 17:07:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:24.887 17:07:54 -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.887 17:07:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.887 17:07:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:24.887 17:07:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:24.887 17:07:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:24.887 17:07:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.887 17:07:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.887 17:07:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.887 17:07:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.887 17:07:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.887 17:07:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:24.887 17:07:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.887 17:07:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:24.887 17:07:54 -- setup/hugepages.sh@73 -- # return 0 00:04:24.887 17:07:54 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:24.887 17:07:54 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:24.887 17:07:54 -- setup/hugepages.sh@146 -- # setup output 00:04:24.887 17:07:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.887 17:07:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.148 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.148 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.148 17:07:55 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:25.148 17:07:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:25.148 17:07:55 -- setup/hugepages.sh@89 -- # local node 00:04:25.148 17:07:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.148 17:07:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.148 17:07:55 -- setup/hugepages.sh@92 -- # local surp 00:04:25.148 17:07:55 -- setup/hugepages.sh@93 -- # local resv 00:04:25.148 17:07:55 -- setup/hugepages.sh@94 -- # local anon 00:04:25.148 17:07:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.148 17:07:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.148 17:07:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.148 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.148 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.148 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.148 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.148 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.148 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.148 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.148 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.148 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8580112 kB' 'MemAvailable: 10545928 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 'SwapCached: 0 kB' 'Active: 892556 kB' 'Inactive: 1408184 kB' 'Active(anon): 133400 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 696 kB' 'Writeback: 0 kB' 'AnonPages: 124508 kB' 'Mapped: 49236 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146028 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75200 kB' 'KernelStack: 6368 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.148 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.148 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 
17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.149 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.149 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.149 17:07:55 -- setup/hugepages.sh@97 -- # anon=0 00:04:25.149 17:07:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.149 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.149 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.149 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.149 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.149 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.149 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.149 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.149 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.149 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8580112 kB' 'MemAvailable: 10545928 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 'SwapCached: 0 kB' 'Active: 892184 kB' 'Inactive: 1408184 kB' 'Active(anon): 133028 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 696 kB' 'Writeback: 0 kB' 'AnonPages: 124136 kB' 'Mapped: 49144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 
kB' 'Slab: 146024 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75196 kB' 'KernelStack: 6412 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.149 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.149 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 
17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.150 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.150 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.151 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.151 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.151 17:07:55 -- setup/hugepages.sh@99 -- # surp=0 00:04:25.151 17:07:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.151 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.151 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.151 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.151 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.151 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.151 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.151 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.151 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.151 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8580112 kB' 'MemAvailable: 10545928 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 'SwapCached: 0 kB' 'Active: 892000 kB' 'Inactive: 1408184 kB' 'Active(anon): 132844 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 123940 kB' 'Mapped: 49144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146024 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75196 kB' 'KernelStack: 6428 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- 
# continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.151 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.151 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.152 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.152 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.413 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.413 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.413 17:07:55 -- setup/hugepages.sh@100 -- # resv=0 00:04:25.413 nr_hugepages=512 00:04:25.413 17:07:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:25.413 
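[annotation] The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until it reaches HugePages_Rsvd, which is how the surp=0 and resv=0 values just assigned were obtained. A minimal sketch of what that helper appears to do, reconstructed from the trace (the function name and exact flow are assumptions, not the shipped setup/common.sh source):

# Sketch (assumed): look up one key in /proc/meminfo, or in a per-node
# meminfo file when a NUMA node number is supplied, and print its value.
get_meminfo_sketch() {
    local get=$1 node=${2-} line var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}             # strip the per-node prefix, if any
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then          # e.g. HugePages_Rsvd
            echo "$val"                         # value as reported by meminfo
            return 0
        fi
    done < "$mem_f"
    return 1
}
# e.g. surp=$(get_meminfo_sketch HugePages_Surp); resv=$(get_meminfo_sketch HugePages_Rsvd)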
resv_hugepages=0 00:04:25.413 17:07:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.413 surplus_hugepages=0 00:04:25.413 17:07:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.413 anon_hugepages=0 00:04:25.413 17:07:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.413 17:07:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:25.413 17:07:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:25.413 17:07:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.413 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.413 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.413 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.413 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.413 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.413 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.413 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.413 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.413 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8580112 kB' 'MemAvailable: 10545928 kB' 'Buffers: 2436 kB' 'Cached: 2175368 kB' 'SwapCached: 0 kB' 'Active: 891992 kB' 'Inactive: 1408184 kB' 'Active(anon): 132836 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 124192 kB' 'Mapped: 49144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146024 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75196 kB' 'KernelStack: 6412 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.413 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.413 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 
17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.414 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.414 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.415 17:07:55 -- setup/common.sh@33 -- # echo 512 00:04:25.415 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.415 17:07:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:25.415 17:07:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.415 17:07:55 -- setup/hugepages.sh@27 -- # local node 00:04:25.415 17:07:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.415 17:07:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.415 17:07:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.415 17:07:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.415 17:07:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.415 17:07:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.415 17:07:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.415 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.415 17:07:55 -- setup/common.sh@18 -- # local node=0 00:04:25.415 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.415 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.415 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.415 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.415 17:07:55 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.415 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.415 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8580112 kB' 'MemUsed: 3661868 kB' 'SwapCached: 0 kB' 'Active: 892000 kB' 'Inactive: 1408184 kB' 'Active(anon): 132844 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408184 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'FilePages: 2177804 kB' 'Mapped: 49144 kB' 'AnonPages: 124176 kB' 'Shmem: 10464 kB' 'KernelStack: 6380 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70828 kB' 'Slab: 146024 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- 
setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.415 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.415 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.416 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.416 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.416 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.416 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.416 17:07:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.416 17:07:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.416 17:07:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.416 17:07:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.416 node0=512 expecting 512 00:04:25.416 17:07:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.416 17:07:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.416 00:04:25.416 real 0m0.522s 00:04:25.416 user 0m0.269s 00:04:25.416 sys 0m0.289s 00:04:25.416 17:07:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.416 17:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:25.416 ************************************ 00:04:25.416 END TEST per_node_1G_alloc 00:04:25.416 ************************************ 00:04:25.416 17:07:55 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:25.416 17:07:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.416 17:07:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.416 17:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:25.416 ************************************ 00:04:25.416 START TEST even_2G_alloc 00:04:25.416 ************************************ 00:04:25.416 17:07:55 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:04:25.416 17:07:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:25.416 17:07:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.416 17:07:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.416 17:07:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.416 17:07:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.416 17:07:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.416 17:07:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.416 17:07:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.416 17:07:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.416 17:07:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.416 17:07:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.416 17:07:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.416 17:07:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.416 17:07:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.416 17:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.416 17:07:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:25.416 17:07:55 -- setup/hugepages.sh@83 -- # : 0 00:04:25.416 17:07:55 -- 
setup/hugepages.sh@84 -- # : 0 00:04:25.416 17:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.416 17:07:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:25.416 17:07:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:25.416 17:07:55 -- setup/hugepages.sh@153 -- # setup output 00:04:25.416 17:07:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.416 17:07:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.939 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.939 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.939 17:07:55 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:25.939 17:07:55 -- setup/hugepages.sh@89 -- # local node 00:04:25.939 17:07:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.939 17:07:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.939 17:07:55 -- setup/hugepages.sh@92 -- # local surp 00:04:25.939 17:07:55 -- setup/hugepages.sh@93 -- # local resv 00:04:25.939 17:07:55 -- setup/hugepages.sh@94 -- # local anon 00:04:25.939 17:07:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.939 17:07:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.939 17:07:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.939 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.939 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.939 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.939 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.939 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.939 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.939 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.939 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.939 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.939 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.939 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7531368 kB' 'MemAvailable: 9497188 kB' 'Buffers: 2436 kB' 'Cached: 2175372 kB' 'SwapCached: 0 kB' 'Active: 892404 kB' 'Inactive: 1408188 kB' 'Active(anon): 133248 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 124416 kB' 'Mapped: 49012 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146016 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75188 kB' 'KernelStack: 6404 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.939 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.939 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.939 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.939 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.939 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.939 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.939 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.939 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.939 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.939 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 
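[annotation] The scan running here is part of even_2G_alloc's verify_nr_hugepages pass: transparent_hugepage reads "always [madvise] never", so the AnonHugePages count is collected alongside the surplus and reserved counters. A rough sketch of the overall check this trace implies (helper and variable names are illustrative and it reuses get_meminfo_sketch from the earlier annotation; not the actual setup/hugepages.sh source):

# Sketch (assumed): total hugepages must equal the requested count plus
# surplus and reserved pages; anonymous (THP) pages are only queried when
# transparent_hugepage is not pinned to [never].
verify_hugepages_sketch() {
    local expected=$1                           # 512 earlier, 1024 for even_2G_alloc
    local total surp resv anon=0
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)
    fi
    echo "anon_hugepages=$anon"
    (( total == expected + surp + resv ))       # e.g. (( 512 == nr_hugepages + surp + resv ))
}
# Per-node expectation, as in "node0=512 expecting 512" above:
#   get_meminfo_sketch HugePages_Total 0        # should report that node's share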
17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # 
continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.940 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.940 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.940 17:07:55 -- setup/hugepages.sh@97 -- # anon=0 00:04:25.940 17:07:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.940 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.940 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.940 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.940 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.940 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.940 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.940 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.940 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.940 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7531116 kB' 'MemAvailable: 9496936 kB' 'Buffers: 2436 kB' 'Cached: 2175372 kB' 'SwapCached: 0 kB' 'Active: 892264 kB' 'Inactive: 1408188 kB' 'Active(anon): 133108 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 124236 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146016 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75188 kB' 'KernelStack: 6448 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # 
continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.940 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.940 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- 
# continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.941 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.941 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.941 17:07:55 -- setup/hugepages.sh@99 -- # surp=0 00:04:25.941 17:07:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.941 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.941 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.941 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.941 17:07:55 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:25.941 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.941 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.941 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.941 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.941 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7531116 kB' 'MemAvailable: 9496936 kB' 'Buffers: 2436 kB' 'Cached: 2175372 kB' 'SwapCached: 0 kB' 'Active: 892136 kB' 'Inactive: 1408188 kB' 'Active(anon): 132980 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 124368 kB' 'Mapped: 48956 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146016 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75188 kB' 'KernelStack: 6416 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 
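The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" records above and below all come from the get_meminfo helper in setup/common.sh: it snapshots a meminfo file into an array with mapfile, then walks it one "var: val" pair at a time until the requested key matches, echoing that key's value. A minimal sketch of that loop, reconstructed from the xtrace (the function and variable names come from the trace; argument handling and defaults are assumed):

    shopt -s extglob
    get_meminfo() {                                  # sketch only, not the real helper
        local get=$1 node=${2:-}                     # e.g. get=HugePages_Rsvd, optional NUMA node
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")             # per-node files prefix each line with "Node N "
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue         # non-matching fields produce the "continue" records
            echo "${val:-0}"                         # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

Called as, for instance, resv=$(get_meminfo HugePages_Rsvd), which is what the hugepages.sh@100 record above is doing; the scan for HugePages_Rsvd continues in the trace below.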
00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- 
setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.941 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.941 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.941 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.941 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.942 17:07:55 -- setup/hugepages.sh@100 -- # resv=0 00:04:25.942 nr_hugepages=1024 00:04:25.942 17:07:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.942 resv_hugepages=0 00:04:25.942 17:07:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.942 surplus_hugepages=0 00:04:25.942 17:07:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.942 anon_hugepages=0 00:04:25.942 17:07:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.942 17:07:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.942 17:07:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.942 17:07:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.942 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.942 17:07:55 -- setup/common.sh@18 -- # local node= 00:04:25.942 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.942 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.942 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.942 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.942 17:07:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.942 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.942 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7531116 kB' 'MemAvailable: 9496936 kB' 
'Buffers: 2436 kB' 'Cached: 2175372 kB' 'SwapCached: 0 kB' 'Active: 891880 kB' 'Inactive: 1408188 kB' 'Active(anon): 132724 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 123864 kB' 'Mapped: 48956 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146012 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75184 kB' 'KernelStack: 6416 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- 
setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 
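The hugepages.sh@107 record a few lines back is the accounting step that all of these scans feed: the kernel's HugePages_Total has to equal the pages the test requested plus any surplus and reserved pages. A condensed sketch of that check, reusing the get_meminfo sketch above and the values echoed in this run (anon=0, surp=0, resv=0, nr_hugepages=1024); whether the real script aborts on a mismatch is assumed here:

    anon=$(get_meminfo AnonHugePages)        # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)       # 0
    resv=$(get_meminfo HugePages_Rsvd)       # 0
    nr_hugepages=1024                        # what the test configured
    total=$(get_meminfo HugePages_Total)     # 1024, being scanned for below

    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    else
        echo "hugepage accounting mismatch: total=$total" >&2
        exit 1                               # assumed: the real script fails the test here
    fi

The trace below is the HugePages_Total lookup that supplies the left-hand side of that comparison.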
00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 
00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.942 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.942 17:07:55 -- setup/common.sh@33 -- # echo 1024 00:04:25.942 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.942 17:07:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.942 17:07:55 -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.942 17:07:55 -- setup/hugepages.sh@27 -- # local node 00:04:25.942 17:07:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.942 17:07:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.942 17:07:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.942 17:07:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.942 17:07:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.942 17:07:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.942 17:07:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.942 17:07:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.942 17:07:55 -- setup/common.sh@18 -- # local node=0 00:04:25.942 17:07:55 -- setup/common.sh@19 -- # local var val 00:04:25.942 17:07:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:25.942 17:07:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.942 17:07:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.942 17:07:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.942 17:07:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.942 17:07:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.942 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7531116 kB' 'MemUsed: 4710864 kB' 'SwapCached: 0 kB' 'Active: 891872 kB' 'Inactive: 1408188 kB' 'Active(anon): 132716 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408188 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'FilePages: 2177808 kB' 'Mapped: 48956 kB' 'AnonPages: 124132 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70828 kB' 'Slab: 146012 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 
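At this point the same scan is repeated per NUMA node: with node=0 the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " from every line before parsing, which is why the record for common.sh@23/@24 above shows the node0 path. Roughly, and assuming only the behaviour visible in the trace:

    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo   # single-node VM, so only node0 exists
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    # or, reusing the get_meminfo sketch above:
    node0_surp=$(get_meminfo HugePages_Surp "$node")      # 0 in this run

The per-node HugePages_Surp scan continues in the records below.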
00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- 
setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # continue 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:25.943 17:07:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:25.943 17:07:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.943 17:07:55 -- setup/common.sh@33 -- # echo 0 00:04:25.943 17:07:55 -- setup/common.sh@33 -- # return 0 00:04:25.943 17:07:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.943 17:07:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.943 17:07:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.943 17:07:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.943 node0=1024 expecting 1024 00:04:25.943 17:07:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.943 17:07:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.943 00:04:25.943 real 0m0.553s 00:04:25.943 user 0m0.283s 00:04:25.943 sys 0m0.303s 
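The scan that completes above is the tail end of even_2G_alloc's per-node check: get_meminfo is invoked as "get_meminfo HugePages_Surp 0", so it swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from every line, and walks the fields until it reaches the one requested; the test then confirms node0 really holds the expected 1024 pages ("node0=1024 expecting 1024"). A minimal stand-in for that helper, written from what the trace shows (the mem_f switch, prefix strip, and IFS=': ' scan) rather than copied from the actual setup/common.sh:

    # Minimal sketch of the get_meminfo helper being traced above; behaviour is
    # inferred from the trace, not taken from the real setup/common.sh.
    shopt -s extglob                              # needed for the +([0-9]) prefix strip
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node 0 "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 0   # node0 surplus pages; 0 in the run above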
00:04:25.943 17:07:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.943 17:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:25.943 ************************************ 00:04:25.943 END TEST even_2G_alloc 00:04:25.943 ************************************ 00:04:25.943 17:07:55 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:25.943 17:07:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.943 17:07:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.943 17:07:55 -- common/autotest_common.sh@10 -- # set +x 00:04:26.201 ************************************ 00:04:26.201 START TEST odd_alloc 00:04:26.201 ************************************ 00:04:26.201 17:07:55 -- common/autotest_common.sh@1111 -- # odd_alloc 00:04:26.201 17:07:55 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:26.201 17:07:55 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:26.201 17:07:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.201 17:07:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.201 17:07:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:26.201 17:07:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.201 17:07:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.201 17:07:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.201 17:07:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:26.201 17:07:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:26.201 17:07:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.201 17:07:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.201 17:07:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.201 17:07:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:26.201 17:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.201 17:07:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:26.201 17:07:55 -- setup/hugepages.sh@83 -- # : 0 00:04:26.202 17:07:55 -- setup/hugepages.sh@84 -- # : 0 00:04:26.202 17:07:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.202 17:07:55 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:26.202 17:07:55 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:26.202 17:07:55 -- setup/hugepages.sh@160 -- # setup output 00:04:26.202 17:07:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.202 17:07:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.461 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.462 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.462 17:07:56 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:26.462 17:07:56 -- setup/hugepages.sh@89 -- # local node 00:04:26.462 17:07:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.462 17:07:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.462 17:07:56 -- setup/hugepages.sh@92 -- # local surp 00:04:26.462 17:07:56 -- setup/hugepages.sh@93 -- # local resv 00:04:26.462 17:07:56 -- setup/hugepages.sh@94 -- # local anon 00:04:26.462 17:07:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.462 17:07:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.462 17:07:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.462 17:07:56 -- setup/common.sh@18 -- # local node= 
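Two details worth pulling out of the odd_alloc prologue above: the test exports HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes before re-running setup, and get_test_nr_hugepages is handed size=2098176 kB. At the default 2048 kB hugepage size that is 1024.5 pages, and the trace shows the request settling on the odd count 1025, which is the whole point of this test. The exact rounding step is not visible in the trace; a ceiling division reproduces the number:

    # Reproduces the nr_hugepages=1025 seen above. The rounding direction is an
    # assumption; the trace only shows the inputs (2098176 kB, 2048 kB pages)
    # and the result.
    size_kb=2098176      # HUGEMEM=2049 -> 2049 * 1024 kB
    default_kb=2048      # Hugepagesize reported in /proc/meminfo
    echo $(( (size_kb + default_kb - 1) / default_kb ))   # -> 1025, an odd page count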
00:04:26.462 17:07:56 -- setup/common.sh@19 -- # local var val 00:04:26.462 17:07:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.462 17:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.462 17:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.462 17:07:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.462 17:07:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.462 17:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7532484 kB' 'MemAvailable: 9498344 kB' 'Buffers: 2436 kB' 'Cached: 2175412 kB' 'SwapCached: 0 kB' 'Active: 892340 kB' 'Inactive: 1408228 kB' 'Active(anon): 133184 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1040 kB' 'Writeback: 0 kB' 'AnonPages: 124336 kB' 'Mapped: 49156 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 146000 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75172 kB' 'KernelStack: 6404 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 
17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 
00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.462 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.462 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.463 17:07:56 -- setup/common.sh@33 -- # echo 0 00:04:26.463 17:07:56 -- setup/common.sh@33 -- # return 0 00:04:26.463 17:07:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.463 17:07:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.463 17:07:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.463 17:07:56 -- setup/common.sh@18 -- # local node= 00:04:26.463 17:07:56 -- setup/common.sh@19 -- # local var val 00:04:26.463 17:07:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.463 17:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.463 17:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.463 17:07:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.463 17:07:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.463 17:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 
17:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7532484 kB' 'MemAvailable: 9498344 kB' 'Buffers: 2436 kB' 'Cached: 2175412 kB' 'SwapCached: 0 kB' 'Active: 892124 kB' 'Inactive: 1408228 kB' 'Active(anon): 132968 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1040 kB' 'Writeback: 0 kB' 'AnonPages: 124148 kB' 'Mapped: 48968 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 145996 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75168 kB' 'KernelStack: 6416 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.463 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.463 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 
17:07:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.725 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.725 17:07:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 
17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.726 17:07:56 -- setup/common.sh@33 -- # echo 0 00:04:26.726 17:07:56 -- setup/common.sh@33 -- # return 0 00:04:26.726 17:07:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.726 17:07:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.726 17:07:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.726 17:07:56 -- setup/common.sh@18 -- # local node= 00:04:26.726 17:07:56 -- setup/common.sh@19 -- # local var val 00:04:26.726 17:07:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.726 17:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.726 17:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.726 17:07:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.726 17:07:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.726 17:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7532484 kB' 'MemAvailable: 9498344 kB' 'Buffers: 2436 kB' 'Cached: 2175412 kB' 'SwapCached: 0 kB' 'Active: 891856 kB' 'Inactive: 1408228 kB' 'Active(anon): 132700 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1040 kB' 'Writeback: 0 kB' 'AnonPages: 124120 kB' 'Mapped: 48968 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 145996 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75168 kB' 'KernelStack: 6400 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.726 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.726 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 
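For orientation: the previous pass recorded surp=0 (setup/hugepages.sh@99), and the identical scan now running extracts HugePages_Rsvd from the same /proc/meminfo snapshot. Outside the harness, the counters these scans are after can be read directly, e.g.:

    # Not part of the test scripts; just a direct view of the counters the
    # field-by-field scans above are extracting.
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo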
00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.727 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.727 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 
-- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 
17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.728 17:07:56 -- setup/common.sh@33 -- # echo 0 00:04:26.728 17:07:56 -- setup/common.sh@33 -- # return 0 00:04:26.728 17:07:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.728 nr_hugepages=1025 00:04:26.728 17:07:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:26.728 resv_hugepages=0 00:04:26.728 17:07:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.728 surplus_hugepages=0 00:04:26.728 17:07:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.728 anon_hugepages=0 00:04:26.728 17:07:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.728 17:07:56 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:26.728 17:07:56 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:26.728 17:07:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.728 17:07:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.728 17:07:56 -- setup/common.sh@18 -- # local node= 00:04:26.728 17:07:56 -- setup/common.sh@19 -- # local var val 00:04:26.728 17:07:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.728 17:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.728 17:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.728 17:07:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.728 17:07:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.728 17:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7532808 kB' 'MemAvailable: 9498668 kB' 'Buffers: 2436 kB' 'Cached: 2175412 kB' 'SwapCached: 0 kB' 'Active: 891868 kB' 'Inactive: 1408228 kB' 'Active(anon): 132712 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1040 kB' 'Writeback: 0 kB' 'AnonPages: 123892 kB' 'Mapped: 48968 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 145996 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75168 kB' 'KernelStack: 6416 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 356276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.728 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.728 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 
17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.729 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.729 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.730 17:07:56 -- setup/common.sh@33 -- # echo 1025 00:04:26.730 17:07:56 -- setup/common.sh@33 -- # return 0 00:04:26.730 17:07:56 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:26.730 17:07:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.730 17:07:56 -- setup/hugepages.sh@27 -- # local node 00:04:26.730 17:07:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.730 17:07:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:26.730 17:07:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.730 17:07:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.730 17:07:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.730 17:07:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.730 17:07:56 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.730 17:07:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.730 17:07:56 -- setup/common.sh@18 -- # local node=0 00:04:26.730 17:07:56 -- setup/common.sh@19 -- # local var val 00:04:26.730 17:07:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.730 17:07:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.730 17:07:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.730 17:07:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.730 17:07:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.730 17:07:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7532808 kB' 'MemUsed: 4709172 kB' 'SwapCached: 0 kB' 'Active: 891872 kB' 'Inactive: 1408228 kB' 'Active(anon): 132716 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1040 kB' 'Writeback: 0 kB' 'FilePages: 2177848 kB' 'Mapped: 48968 kB' 'AnonPages: 123888 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70828 kB' 'Slab: 145996 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 
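For reference, the field-by-field scan above is get_meminfo walking /sys/devices/system/node/node0/meminfo one line at a time. A minimal stand-alone sketch of the same per-node read (hypothetical helper name, plain read loop instead of the script's mapfile/extglob handling) would be:
  # Sketch only: fetch one counter for a given NUMA node.
  # Per-node meminfo lines look like "Node 0 HugePages_Surp:     0",
  # so the first two fields are skipped before the key is compared.
  node_meminfo() {
    local field=$1 node=$2 _ var val
    while IFS=': ' read -r _ _ var val _; do
      [[ $var == "$field" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
  }
  node_meminfo HugePages_Surp 0   # prints 0 on this run, per the node0 dump above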
00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.730 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.730 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 
17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # continue 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.731 17:07:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.731 17:07:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.731 17:07:56 -- setup/common.sh@33 -- # echo 0 00:04:26.731 17:07:56 -- setup/common.sh@33 -- # return 0 00:04:26.731 17:07:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.731 17:07:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.731 17:07:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.731 17:07:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.731 node0=1025 expecting 1025 00:04:26.731 17:07:56 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:26.731 17:07:56 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:26.731 00:04:26.731 real 0m0.611s 00:04:26.731 user 0m0.310s 00:04:26.731 sys 0m0.303s 00:04:26.731 17:07:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.731 17:07:56 -- common/autotest_common.sh@10 -- # set +x 00:04:26.731 ************************************ 00:04:26.731 END TEST odd_alloc 00:04:26.731 ************************************ 00:04:26.731 17:07:56 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:26.731 17:07:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.731 17:07:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.731 17:07:56 -- common/autotest_common.sh@10 -- # set +x 00:04:26.990 ************************************ 00:04:26.990 START TEST custom_alloc 00:04:26.990 ************************************ 00:04:26.990 17:07:56 -- common/autotest_common.sh@1111 -- # custom_alloc 00:04:26.990 17:07:56 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:26.990 17:07:56 -- setup/hugepages.sh@169 -- # local node 00:04:26.990 17:07:56 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:26.990 17:07:56 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:26.990 17:07:56 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:26.990 17:07:56 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:26.990 17:07:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:26.990 17:07:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.990 17:07:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.990 17:07:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
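The nr_hugepages=512 that custom_alloc just settled on comes from dividing the requested 1048576 kB by the 2048 kB Hugepagesize this host reports in the meminfo dumps. A one-line restatement of that arithmetic (not the script's own code) is:
  # 1048576 kB requested / 2048 kB per hugepage = 512 pages
  size_kb=1048576
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
  echo $(( size_kb / hp_kb ))                                # -> 512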
00:04:26.991 17:07:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.991 17:07:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.991 17:07:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.991 17:07:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:26.991 17:07:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:26.991 17:07:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.991 17:07:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.991 17:07:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:26.991 17:07:56 -- setup/hugepages.sh@83 -- # : 0 00:04:26.991 17:07:56 -- setup/hugepages.sh@84 -- # : 0 00:04:26.991 17:07:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:26.991 17:07:56 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:26.991 17:07:56 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:26.991 17:07:56 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:26.991 17:07:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.991 17:07:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.991 17:07:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:26.991 17:07:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:26.991 17:07:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.991 17:07:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.991 17:07:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:26.991 17:07:56 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:26.991 17:07:56 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:26.991 17:07:56 -- setup/hugepages.sh@78 -- # return 0 00:04:26.991 17:07:56 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:26.991 17:07:56 -- setup/hugepages.sh@187 -- # setup output 00:04:26.991 17:07:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.991 17:07:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.253 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.253 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.253 17:07:57 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:27.253 17:07:57 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:27.253 17:07:57 -- setup/hugepages.sh@89 -- # local node 00:04:27.253 17:07:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.253 17:07:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.253 17:07:57 -- setup/hugepages.sh@92 -- # local surp 00:04:27.253 17:07:57 -- setup/hugepages.sh@93 -- # local resv 00:04:27.253 17:07:57 -- setup/hugepages.sh@94 -- # local anon 00:04:27.253 17:07:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.253 17:07:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.253 
17:07:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.253 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:27.253 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:27.253 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.253 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.253 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.253 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.253 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.253 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8586480 kB' 'MemAvailable: 10552344 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 892296 kB' 'Inactive: 1408232 kB' 'Active(anon): 133140 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1228 kB' 'Writeback: 0 kB' 'AnonPages: 124556 kB' 'Mapped: 49140 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 145964 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75136 kB' 'KernelStack: 6404 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 
17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.253 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.254 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:27.254 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:27.254 17:07:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:27.254 17:07:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.254 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.254 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:27.254 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:27.254 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.254 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.254 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.254 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.254 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.254 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
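Here get_meminfo runs again, now for HugePages_Surp with an empty node, so the node-specific path check fails and mem_f stays /proc/meminfo. A condensed restatement of the pattern the trace keeps repeating (not the verbatim script; the sed prefix strip stands in for the "${mem[@]#Node +([0-9]) }" expansion shown above) is:
  # get: meminfo key to fetch; node: empty for the whole system, else a node number
  get=HugePages_Surp node=
  mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; break; }
  done < <(sed 's/^Node [0-9]* //' "$mem_f")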
00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8586532 kB' 'MemAvailable: 10552396 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 892036 kB' 'Inactive: 1408232 kB' 'Active(anon): 132880 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1228 kB' 'Writeback: 0 kB' 'AnonPages: 124020 kB' 'Mapped: 49140 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 145964 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75136 kB' 'KernelStack: 6404 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 
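The dump just printed shows HugePages_Total: 512 with HugePages_Rsvd and HugePages_Surp both 0 for this custom_alloc run, so the verification that follows reduces to the same identity the odd_alloc pass checked above with 1025: total == requested + surplus + reserved. Spelled out with this run's numbers (taking the zeros from the dump):
  nr_hugepages=512 surp=0 resv=0
  (( 512 == nr_hugepages + surp + resv )) && echo "HugePages_Total accounted for"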
00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 
17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:27.255 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:27.255 17:07:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:27.255 17:07:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.255 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.255 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:27.255 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:27.255 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.255 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.255 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.255 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.255 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.255 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8586824 kB' 'MemAvailable: 10552688 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 891816 kB' 'Inactive: 1408232 kB' 'Active(anon): 132660 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1228 kB' 'Writeback: 0 kB' 'AnonPages: 123784 kB' 'Mapped: 49044 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 145960 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75132 kB' 'KernelStack: 6400 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355972 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 17:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 
-- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.257 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:27.257 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:27.257 17:07:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:27.257 nr_hugepages=512 00:04:27.257 17:07:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:27.257 resv_hugepages=0 00:04:27.257 17:07:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.257 surplus_hugepages=0 00:04:27.257 17:07:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.257 anon_hugepages=0 00:04:27.257 17:07:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.257 17:07:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:27.257 17:07:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:27.257 17:07:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.257 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.257 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:27.257 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:27.257 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.257 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.257 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.257 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.257 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.257 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8586824 kB' 'MemAvailable: 10552688 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 892036 kB' 'Inactive: 1408232 kB' 'Active(anon): 132880 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1228 kB' 'Writeback: 0 kB' 'AnonPages: 124004 kB' 'Mapped: 49044 kB' 'Shmem: 10464 kB' 'KReclaimable: 70828 kB' 'Slab: 145960 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 75132 kB' 'KernelStack: 6384 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.257 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.257 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.258 17:07:57 -- setup/common.sh@33 -- # echo 512 00:04:27.258 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:27.258 17:07:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:27.258 17:07:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.258 17:07:57 -- setup/hugepages.sh@27 -- # local node 00:04:27.258 17:07:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.258 17:07:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:27.258 17:07:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.258 17:07:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.258 17:07:57 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:04:27.258 17:07:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.258 17:07:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.258 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.258 17:07:57 -- setup/common.sh@18 -- # local node=0 00:04:27.258 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:27.258 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.258 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.258 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.258 17:07:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.258 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.258 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8586824 kB' 'MemUsed: 3655156 kB' 'SwapCached: 0 kB' 'Active: 891840 kB' 'Inactive: 1408232 kB' 'Active(anon): 132684 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1228 kB' 'Writeback: 0 kB' 'FilePages: 2177852 kB' 'Mapped: 48984 kB' 'AnonPages: 124080 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70812 kB' 'Slab: 145940 kB' 'SReclaimable: 70812 kB' 'SUnreclaim: 75128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.258 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.258 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 
17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 
-- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.259 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.259 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.518 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:27.518 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:27.518 17:07:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.518 17:07:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.518 17:07:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.518 17:07:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.518 node0=512 expecting 512 00:04:27.518 17:07:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:27.518 17:07:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:27.518 00:04:27.518 real 0m0.530s 00:04:27.518 user 0m0.274s 00:04:27.518 sys 0m0.289s 00:04:27.518 17:07:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.518 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:27.518 ************************************ 00:04:27.518 END TEST custom_alloc 00:04:27.518 ************************************ 00:04:27.518 17:07:57 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:27.518 17:07:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.518 17:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.518 17:07:57 -- common/autotest_common.sh@10 -- # set +x 00:04:27.518 ************************************ 00:04:27.518 START TEST no_shrink_alloc 00:04:27.518 ************************************ 00:04:27.518 17:07:57 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:04:27.518 17:07:57 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:27.518 17:07:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.518 17:07:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:27.518 17:07:57 -- setup/hugepages.sh@51 -- # shift 00:04:27.518 17:07:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:27.518 17:07:57 -- setup/hugepages.sh@52 -- # local node_ids 00:04:27.518 17:07:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.518 17:07:57 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:27.518 17:07:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:27.518 17:07:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:27.518 17:07:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.518 17:07:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.518 17:07:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:27.518 17:07:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.518 17:07:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.518 17:07:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:27.518 17:07:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:27.518 17:07:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:27.518 17:07:57 -- setup/hugepages.sh@73 -- # return 0 00:04:27.518 17:07:57 -- setup/hugepages.sh@198 -- # setup output 00:04:27.518 17:07:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.518 17:07:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.777 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.777 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.777 17:07:57 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:27.777 17:07:57 -- setup/hugepages.sh@89 -- # local node 00:04:27.777 17:07:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.777 17:07:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.777 17:07:57 -- setup/hugepages.sh@92 -- # local surp 00:04:27.777 17:07:57 -- setup/hugepages.sh@93 -- # local resv 00:04:27.777 17:07:57 -- setup/hugepages.sh@94 -- # local anon 00:04:27.777 17:07:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.777 17:07:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.777 17:07:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.777 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:27.777 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:27.777 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.777 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.777 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.777 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.777 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.777 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7550880 kB' 'MemAvailable: 9516740 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 887796 kB' 'Inactive: 1408232 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1380 kB' 'Writeback: 0 kB' 'AnonPages: 120008 kB' 'Mapped: 48380 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145916 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75092 kB' 'KernelStack: 6244 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339812 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.777 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.777 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 
17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.778 17:07:57 -- setup/common.sh@32 -- # continue 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.778 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # 
continue 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.039 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.039 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.040 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:28.040 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:28.040 17:07:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:28.040 17:07:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.040 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.040 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:28.040 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:28.040 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.040 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.040 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.040 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.040 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.040 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7550880 kB' 'MemAvailable: 9516740 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 887476 kB' 'Inactive: 1408232 kB' 'Active(anon): 128320 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1380 kB' 'Writeback: 0 kB' 'AnonPages: 119712 kB' 'Mapped: 48372 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145916 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75092 kB' 'KernelStack: 6244 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # 
continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.040 17:07:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:28.040 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 
00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.041 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:28.041 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:28.041 17:07:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:28.041 17:07:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.041 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.041 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:28.041 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:28.041 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.041 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.041 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.041 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.041 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.041 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7551920 kB' 'MemAvailable: 9517780 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 887368 kB' 'Inactive: 1408232 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1380 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 48252 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145916 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75092 kB' 'KernelStack: 6304 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.041 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.041 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- 
setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.042 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.042 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.043 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:28.043 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:28.043 17:07:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:28.043 nr_hugepages=1024 00:04:28.043 17:07:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.043 resv_hugepages=0 00:04:28.043 17:07:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.043 surplus_hugepages=0 00:04:28.043 17:07:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.043 anon_hugepages=0 00:04:28.043 17:07:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.043 17:07:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.043 17:07:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:28.043 17:07:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.043 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.043 17:07:57 -- setup/common.sh@18 -- # local node= 00:04:28.043 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:28.043 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.043 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
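The trace above is the get_meminfo helper from setup/common.sh scanning a captured /proc/meminfo snapshot field by field -- hence one "[[ key == target ]] / continue" pair per meminfo line -- until it reaches the requested field (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd) and echoes its value. A minimal sketch of that pattern, reconstructed from the trace rather than taken from the SPDK script source (the function name, argument handling, and here-string split below are illustrative assumptions):

    shopt -s extglob

    # Illustrative reconstruction of the parsing loop visible in the trace; not
    # the SPDK test/setup/common.sh source. "get" is the meminfo field to report,
    # "node" optionally selects a per-node snapshot (used later in this log).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # node meminfo lines carry a "Node N " prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the long runs of "continue" in the trace
            echo "${val:-0}"                  # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        done
        echo 0
    }

Called as "get_meminfo_sketch HugePages_Rsvd" it would print 0 for the snapshot shown above, which is the value hugepages.sh stores in resv before checking the totals.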
00:04:28.043 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.043 17:07:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.043 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.043 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7552120 kB' 'MemAvailable: 9517980 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 887328 kB' 'Inactive: 1408232 kB' 'Active(anon): 128172 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1380 kB' 'Writeback: 0 kB' 'AnonPages: 119596 kB' 'Mapped: 48252 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145916 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75092 kB' 'KernelStack: 6288 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- 
setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.043 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.043 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 
00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.044 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.044 17:07:57 -- setup/common.sh@33 -- # echo 1024 00:04:28.044 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:28.044 17:07:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.044 17:07:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.044 17:07:57 -- setup/hugepages.sh@27 -- # local node 00:04:28.044 17:07:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.044 17:07:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.044 17:07:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.044 17:07:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.044 17:07:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.044 17:07:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.044 17:07:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.044 17:07:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.044 17:07:57 -- setup/common.sh@18 -- # local node=0 00:04:28.044 17:07:57 -- setup/common.sh@19 -- # local var val 00:04:28.044 17:07:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.044 17:07:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.044 17:07:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.044 17:07:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.044 17:07:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.044 17:07:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.044 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7552120 kB' 'MemUsed: 4689860 kB' 'SwapCached: 0 kB' 'Active: 887328 kB' 'Inactive: 1408232 kB' 'Active(anon): 128172 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1380 kB' 'Writeback: 0 kB' 'FilePages: 2177852 kB' 'Mapped: 48252 kB' 'AnonPages: 119564 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70824 kB' 'Slab: 145912 
kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 
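With the HugePages_Total pass done, the values echoed so far make the consistency check in hugepages.sh (the (( ... )) tests at @107/@110 in the trace) plain arithmetic, and get_nodes finds a single NUMA node whose node0/meminfo is then re-scanned for HugePages_Surp. A minimal restatement under those values (taken from the echoes in this log, not from the script source):

    # Values echoed earlier in this trace:
    #   nr_hugepages      = 1024   (HugePages_Total)
    #   surplus_hugepages = 0      (HugePages_Surp)
    #   resv_hugepages    = 0      (HugePages_Rsvd)
    #   anon_hugepages    = 0      (AnonHugePages)
    # The check therefore reduces to 1024 == 1024 + 0 + 0, which passes, and with
    # no_nodes=1 all 1024 pages are expected on node0 (nodes_sys[0]=1024).
    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"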
00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- 
setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # continue 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.045 17:07:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.045 17:07:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.045 17:07:57 -- setup/common.sh@33 -- # echo 0 00:04:28.045 17:07:57 -- setup/common.sh@33 -- # return 0 00:04:28.045 17:07:57 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:28.045 17:07:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.045 17:07:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.045 17:07:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.046 node0=1024 expecting 1024 00:04:28.046 17:07:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:28.046 17:07:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:28.046 17:07:57 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:28.046 17:07:57 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:28.046 17:07:57 -- setup/hugepages.sh@202 -- # setup output 00:04:28.046 17:07:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.046 17:07:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.305 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.305 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:28.305 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:28.305 17:07:58 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:28.305 17:07:58 -- setup/hugepages.sh@89 -- # local node 00:04:28.305 17:07:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.305 17:07:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.305 17:07:58 -- setup/hugepages.sh@92 -- # local surp 00:04:28.305 17:07:58 -- setup/hugepages.sh@93 -- # local resv 00:04:28.305 17:07:58 -- setup/hugepages.sh@94 -- # local anon 00:04:28.305 17:07:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.305 17:07:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.305 17:07:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.305 17:07:58 -- setup/common.sh@18 -- # local node= 00:04:28.305 17:07:58 -- setup/common.sh@19 -- # local var val 00:04:28.305 17:07:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.305 17:07:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.305 17:07:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.305 17:07:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.305 17:07:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.305 17:07:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7550176 kB' 'MemAvailable: 9516036 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 888368 kB' 'Inactive: 1408232 kB' 'Active(anon): 129212 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1388 kB' 'Writeback: 0 kB' 'AnonPages: 120336 kB' 'Mapped: 48456 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145904 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75080 kB' 'KernelStack: 6372 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.305 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.305 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- 
setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.306 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.306 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.568 17:07:58 -- setup/common.sh@33 -- # echo 0 00:04:28.568 17:07:58 -- setup/common.sh@33 -- # return 0 00:04:28.568 17:07:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:28.568 17:07:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.568 17:07:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.568 17:07:58 -- setup/common.sh@18 -- # local node= 00:04:28.568 17:07:58 -- setup/common.sh@19 -- # local var val 00:04:28.568 17:07:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.568 17:07:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.568 17:07:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.568 17:07:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.568 17:07:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.568 17:07:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7550176 kB' 'MemAvailable: 9516036 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 887868 kB' 'Inactive: 1408232 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1388 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 48456 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145904 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75080 kB' 'KernelStack: 6292 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.568 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.568 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 
17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
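Stated as plain arithmetic, the verification running through this stretch of the trace reduces to a small identity over the meminfo values printed above. The numbers below are taken from those snapshots; the variable names follow the hugepages.sh trace, and this is a sketch of the check, not the verbatim script:

    HugePages_Total=1024   # /proc/meminfo
    surp=0                 # HugePages_Surp
    resv=0                 # HugePages_Rsvd
    anon=0                 # AnonHugePages; THP is "always [madvise] never", so nothing is forced
    nr_hugepages=1024      # what the test configured

    # hugepages.sh@110-style check: the pool must account for every configured page.
    (( HugePages_Total == nr_hugepages + surp + resv )) &&
        echo "nr_hugepages=$nr_hugepages verified"

With surp and resv both 0, the expected per-node count equals the configured 1024, which is why the log printed "node0=1024 expecting 1024" earlier and why the re-run of setup.sh with NRHUGE=512 simply reports that 1024 pages are already allocated on node0.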
00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.569 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.569 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.570 17:07:58 -- 
setup/common.sh@33 -- # echo 0 00:04:28.570 17:07:58 -- setup/common.sh@33 -- # return 0 00:04:28.570 17:07:58 -- setup/hugepages.sh@99 -- # surp=0 00:04:28.570 17:07:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.570 17:07:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.570 17:07:58 -- setup/common.sh@18 -- # local node= 00:04:28.570 17:07:58 -- setup/common.sh@19 -- # local var val 00:04:28.570 17:07:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.570 17:07:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.570 17:07:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.570 17:07:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.570 17:07:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.570 17:07:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549924 kB' 'MemAvailable: 9515784 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 887468 kB' 'Inactive: 1408232 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1388 kB' 'Writeback: 0 kB' 'AnonPages: 119416 kB' 'Mapped: 48268 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145904 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75080 kB' 'KernelStack: 6320 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- 
setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.570 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.570 17:07:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 
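The per-node bookkeeping behind that earlier "node0=1024 expecting 1024" line shows up in the trace as get_nodes (hugepages.sh@27-33) followed by the resv/surp adjustments (@115-117) and the final echo (@128). A loose reconstruction is sketched below; the seeding of nodes_test from the per-node totals and the use of a single global surp/resv are assumptions made for illustration, not the script's exact logic:

    shopt -s extglob
    resv=0; surp=0                      # HugePages_Rsvd / HugePages_Surp from the scans above
    declare -A nodes_sys nodes_test

    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # "Node 0 HugePages_Total: 1024" in the per-node meminfo -> 1024 on this VM
        nodes_sys[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
        nodes_test[$n]=${nodes_sys[$n]}  # assumption: expected count seeded per node
    done

    for n in "${!nodes_test[@]}"; do
        (( nodes_test[n] += resv ))      # hugepages.sh@116
        (( nodes_test[n] += surp ))      # hugepages.sh@117, via get_meminfo HugePages_Surp $n
        echo "node$n=${nodes_test[n]} expecting ${nodes_sys[n]}"   # -> node0=1024 expecting 1024
    done

With a single node and no surplus or reserved pages, both sides stay at 1024, so the comparison at hugepages.sh@130 ([[ 1024 == 1024 ]]) passes before the test moves on to the NRHUGE=512 re-run.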
00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.571 17:07:58 -- setup/common.sh@33 -- # echo 0 00:04:28.571 17:07:58 -- setup/common.sh@33 -- # return 0 00:04:28.571 17:07:58 -- setup/hugepages.sh@100 -- # resv=0 00:04:28.571 nr_hugepages=1024 00:04:28.571 17:07:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:28.571 resv_hugepages=0 00:04:28.571 17:07:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.571 surplus_hugepages=0 00:04:28.571 17:07:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.571 anon_hugepages=0 00:04:28.571 17:07:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.571 17:07:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.571 17:07:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:28.571 17:07:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.571 17:07:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.571 17:07:58 -- setup/common.sh@18 -- # local node= 00:04:28.571 17:07:58 -- setup/common.sh@19 -- # local var val 00:04:28.571 17:07:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.571 17:07:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.571 17:07:58 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:28.571 17:07:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.571 17:07:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.571 17:07:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549924 kB' 'MemAvailable: 9515784 kB' 'Buffers: 2436 kB' 'Cached: 2175416 kB' 'SwapCached: 0 kB' 'Active: 887712 kB' 'Inactive: 1408232 kB' 'Active(anon): 128556 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1388 kB' 'Writeback: 0 kB' 'AnonPages: 119660 kB' 'Mapped: 48528 kB' 'Shmem: 10464 kB' 'KReclaimable: 70824 kB' 'Slab: 145904 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75080 kB' 'KernelStack: 6304 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.571 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.571 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 
-- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 
17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.572 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.572 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.573 17:07:58 -- setup/common.sh@33 -- # echo 1024 00:04:28.573 17:07:58 -- setup/common.sh@33 -- # return 0 00:04:28.573 17:07:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:28.573 17:07:58 -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.573 17:07:58 -- setup/hugepages.sh@27 -- # local node 00:04:28.573 17:07:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.573 17:07:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.573 17:07:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.573 17:07:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.573 17:07:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.573 17:07:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.573 17:07:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.573 17:07:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.573 17:07:58 -- setup/common.sh@18 -- # local node=0 00:04:28.573 17:07:58 -- setup/common.sh@19 -- # local var val 00:04:28.573 17:07:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.573 17:07:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.573 17:07:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.573 17:07:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.573 17:07:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.573 17:07:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7549672 kB' 'MemUsed: 4692308 kB' 'SwapCached: 0 kB' 'Active: 887616 kB' 'Inactive: 1408232 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 759156 kB' 'Inactive(file): 1408232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1388 kB' 'Writeback: 0 kB' 'FilePages: 2177852 kB' 'Mapped: 48260 kB' 'AnonPages: 119592 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70824 kB' 'Slab: 145900 kB' 'SReclaimable: 70824 kB' 'SUnreclaim: 75076 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.573 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.573 17:07:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # continue 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.574 17:07:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.574 17:07:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.574 17:07:58 -- setup/common.sh@33 -- # echo 0 00:04:28.574 17:07:58 -- setup/common.sh@33 -- # return 0 00:04:28.574 17:07:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.574 17:07:58 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.574 17:07:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.574 17:07:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.574 node0=1024 expecting 1024 00:04:28.574 17:07:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:28.574 17:07:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:28.574 00:04:28.574 real 0m1.041s 00:04:28.574 user 0m0.531s 00:04:28.574 sys 0m0.576s 00:04:28.574 17:07:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.574 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:04:28.574 ************************************ 00:04:28.574 END TEST no_shrink_alloc 00:04:28.574 ************************************ 00:04:28.574 17:07:58 -- setup/hugepages.sh@217 -- # clear_hp 00:04:28.574 17:07:58 -- setup/hugepages.sh@37 -- # local node hp 00:04:28.574 17:07:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:28.574 17:07:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.574 17:07:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.574 17:07:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.574 17:07:58 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.574 17:07:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:28.574 17:07:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:28.574 00:04:28.574 real 0m5.100s 00:04:28.574 user 0m2.454s 00:04:28.574 sys 0m2.667s 00:04:28.574 17:07:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.574 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:04:28.574 ************************************ 00:04:28.574 END TEST hugepages 00:04:28.575 ************************************ 00:04:28.575 17:07:58 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:28.575 17:07:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.575 17:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.575 17:07:58 -- common/autotest_common.sh@10 -- # set +x 00:04:28.834 ************************************ 00:04:28.834 START TEST driver 00:04:28.834 ************************************ 00:04:28.834 17:07:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:28.834 * Looking for test storage... 
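The long field-by-field scan in the trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node number is given) until it reaches the requested key, e.g. HugePages_Total or HugePages_Surp. A minimal standalone sketch of that lookup — a hypothetical helper for illustration, not the SPDK script itself, simplified to a plain read loop — would be:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (hypothetical helper, not setup/common.sh).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node queries read the node-specific file, as the trace does for "HugePages_Surp 0".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }               # per-node files prefix each field with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        echo 0
    }
    get_meminfo HugePages_Total      # -> 1024 on the box above
    get_meminfo HugePages_Surp 0     # -> 0, read from node0's meminfo

The hugepages test then only compares those values against the expected nr_hugepages=1024 per node, which is what the "node0=1024 expecting 1024" line reports.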
00:04:28.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:28.834 17:07:58 -- setup/driver.sh@68 -- # setup reset 00:04:28.834 17:07:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.834 17:07:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.403 17:07:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:29.403 17:07:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.403 17:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.403 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:04:29.403 ************************************ 00:04:29.403 START TEST guess_driver 00:04:29.403 ************************************ 00:04:29.403 17:07:59 -- common/autotest_common.sh@1111 -- # guess_driver 00:04:29.403 17:07:59 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:29.403 17:07:59 -- setup/driver.sh@47 -- # local fail=0 00:04:29.403 17:07:59 -- setup/driver.sh@49 -- # pick_driver 00:04:29.403 17:07:59 -- setup/driver.sh@36 -- # vfio 00:04:29.403 17:07:59 -- setup/driver.sh@21 -- # local iommu_grups 00:04:29.403 17:07:59 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:29.403 17:07:59 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:29.403 17:07:59 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:29.403 17:07:59 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:29.403 17:07:59 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:29.403 17:07:59 -- setup/driver.sh@32 -- # return 1 00:04:29.403 17:07:59 -- setup/driver.sh@38 -- # uio 00:04:29.403 17:07:59 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:29.403 17:07:59 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:29.403 17:07:59 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:29.403 17:07:59 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:29.403 17:07:59 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:29.403 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:29.403 17:07:59 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:29.403 17:07:59 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:29.403 17:07:59 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:29.403 17:07:59 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:29.403 Looking for driver=uio_pci_generic 00:04:29.403 17:07:59 -- setup/driver.sh@45 -- # setup output config 00:04:29.404 17:07:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.404 17:07:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.404 17:07:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.971 17:07:59 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:29.971 17:07:59 -- setup/driver.sh@58 -- # continue 00:04:29.971 17:07:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.229 17:08:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.229 17:08:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:30.229 17:08:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.229 17:08:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.229 17:08:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:30.229 17:08:00 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.229 17:08:00 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:30.229 17:08:00 -- setup/driver.sh@65 -- # setup reset 00:04:30.229 17:08:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.229 17:08:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.796 00:04:30.796 real 0m1.398s 00:04:30.796 user 0m0.528s 00:04:30.796 sys 0m0.857s 00:04:30.796 17:08:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.796 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.796 ************************************ 00:04:30.796 END TEST guess_driver 00:04:30.796 ************************************ 00:04:30.796 00:04:30.796 real 0m2.165s 00:04:30.796 user 0m0.776s 00:04:30.796 sys 0m1.413s 00:04:30.796 17:08:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.796 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.796 ************************************ 00:04:30.796 END TEST driver 00:04:30.796 ************************************ 00:04:30.796 17:08:00 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:30.796 17:08:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.796 17:08:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.796 17:08:00 -- common/autotest_common.sh@10 -- # set +x 00:04:31.055 ************************************ 00:04:31.055 START TEST devices 00:04:31.055 ************************************ 00:04:31.055 17:08:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:31.055 * Looking for test storage... 00:04:31.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:31.055 17:08:00 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:31.055 17:08:00 -- setup/devices.sh@192 -- # setup reset 00:04:31.055 17:08:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.055 17:08:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.991 17:08:01 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:31.991 17:08:01 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:31.991 17:08:01 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:31.991 17:08:01 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:31.991 17:08:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:31.991 17:08:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:31.991 17:08:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:31.991 17:08:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:31.991 17:08:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:31.991 17:08:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:31.991 17:08:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:04:31.991 17:08:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:04:31.991 17:08:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:31.991 17:08:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:31.991 17:08:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:31.991 17:08:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:04:31.991 17:08:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:04:31.991 17:08:01 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:31.991 17:08:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:31.991 17:08:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:31.991 17:08:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:31.991 17:08:01 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:31.991 17:08:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:31.991 17:08:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:31.991 17:08:01 -- setup/devices.sh@196 -- # blocks=() 00:04:31.991 17:08:01 -- setup/devices.sh@196 -- # declare -a blocks 00:04:31.991 17:08:01 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:31.991 17:08:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:31.991 17:08:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:31.991 17:08:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:31.991 17:08:01 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:31.991 17:08:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:31.991 17:08:01 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:31.991 17:08:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:31.991 No valid GPT data, bailing 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # pt= 00:04:31.991 17:08:01 -- scripts/common.sh@392 -- # return 1 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:31.991 17:08:01 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:31.991 17:08:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:31.991 17:08:01 -- setup/common.sh@80 -- # echo 4294967296 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:31.991 17:08:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:31.991 17:08:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:31.991 17:08:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:31.991 17:08:01 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:31.991 17:08:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:31.991 17:08:01 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:31.991 17:08:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:31.991 No valid GPT data, bailing 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # pt= 00:04:31.991 17:08:01 -- scripts/common.sh@392 -- # return 1 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:31.991 17:08:01 -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:31.991 17:08:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:31.991 17:08:01 -- setup/common.sh@80 -- # echo 4294967296 00:04:31.991 17:08:01 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:31.991 17:08:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:31.991 17:08:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:31.991 17:08:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:31.991 17:08:01 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:31.991 17:08:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:31.991 17:08:01 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:31.991 17:08:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:31.991 No valid GPT data, bailing 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # pt= 00:04:31.991 17:08:01 -- scripts/common.sh@392 -- # return 1 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:31.991 17:08:01 -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:31.991 17:08:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:31.991 17:08:01 -- setup/common.sh@80 -- # echo 4294967296 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:31.991 17:08:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:31.991 17:08:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:31.991 17:08:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:31.991 17:08:01 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:31.991 17:08:01 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:31.991 17:08:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:31.991 17:08:01 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:31.991 17:08:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:31.991 No valid GPT data, bailing 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:31.991 17:08:01 -- scripts/common.sh@391 -- # pt= 00:04:31.991 17:08:01 -- scripts/common.sh@392 -- # return 1 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:31.991 17:08:01 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:31.991 17:08:01 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:31.991 17:08:01 -- setup/common.sh@80 -- # echo 5368709120 00:04:31.991 17:08:01 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:31.991 17:08:01 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:31.991 17:08:01 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:31.991 17:08:01 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:31.991 17:08:01 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:31.991 17:08:01 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:31.992 17:08:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.992 17:08:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.992 17:08:01 -- common/autotest_common.sh@10 -- # set +x 00:04:32.250 
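Before the mount tests start, devices.sh walks every non-character nvme block device, skips zoned namespaces, runs spdk-gpt.py and blkid to confirm the disk carries no partition table ("No valid GPT data, bailing" is the expected outcome here), and keeps anything of at least min_disk_size=3221225472 bytes. A rough standalone approximation of that filter — hypothetical, and using blkid alone in place of the spdk-gpt.py check — is:

    # Hypothetical condensed version of the device filter traced above; read-only.
    min_disk_size=3221225472                                 # 3 GiB, as set at devices.sh@198
    blocks=()
    for dev in /sys/block/nvme*; do
        name=${dev##*/}
        [[ $name == *c* ]] && continue                       # mirror the !(*c*) glob in the trace
        [[ $(<"$dev/queue/zoned") != none ]] && continue     # zoned namespaces are excluded
        pt=$(blkid -s PTTYPE -o value "/dev/$name")          # empty output == no partition table found
        [[ -n $pt ]] && continue                             # an existing label means the disk is in use
        size=$(( $(<"$dev/size") * 512 ))                    # the size file counts 512-byte sectors
        (( size >= min_disk_size )) && blocks+=("$name")
    done
    printf 'test candidates: %s\n' "${blocks[*]}"            # nvme0n1 nvme0n2 nvme0n3 nvme1n1 in this run

All four namespaces pass, and nvme0n1 is declared the test disk for the mount tests that follow.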
************************************ 00:04:32.250 START TEST nvme_mount 00:04:32.250 ************************************ 00:04:32.250 17:08:01 -- common/autotest_common.sh@1111 -- # nvme_mount 00:04:32.250 17:08:01 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:32.250 17:08:01 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:32.250 17:08:01 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.250 17:08:01 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.250 17:08:01 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:32.250 17:08:01 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:32.250 17:08:01 -- setup/common.sh@40 -- # local part_no=1 00:04:32.250 17:08:01 -- setup/common.sh@41 -- # local size=1073741824 00:04:32.250 17:08:01 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:32.250 17:08:01 -- setup/common.sh@44 -- # parts=() 00:04:32.250 17:08:01 -- setup/common.sh@44 -- # local parts 00:04:32.250 17:08:01 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:32.250 17:08:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.250 17:08:01 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:32.250 17:08:01 -- setup/common.sh@46 -- # (( part++ )) 00:04:32.250 17:08:01 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:32.250 17:08:01 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:32.250 17:08:01 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:32.250 17:08:01 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:33.185 Creating new GPT entries in memory. 00:04:33.185 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:33.185 other utilities. 00:04:33.185 17:08:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:33.185 17:08:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.185 17:08:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:33.185 17:08:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.185 17:08:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:34.119 Creating new GPT entries in memory. 00:04:34.119 The operation has completed successfully. 
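The two sgdisk calls above first wipe the label on nvme0n1 and then carve out a single partition; the next lines format and mount it for the test. Replayed as a plain command sequence (device and mount paths taken from the trace, udevadm settle standing in for scripts/sync_dev_uevents.sh, and only ever against a scratch disk):

    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                              # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1 over sectors 2048..264191
    udevadm settle                                        # wait for the nvme0n1p1 uevent (stand-in)
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"                             # quiet, forced ext4, as in the mkfs helper
    mount "${disk}p1" "$mnt"

The verify step that follows only re-reads the setup config to confirm the mount is visible and that the owning PCI device (0000:00:11.0) is left bound to the kernel driver while the partition is in use.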
00:04:34.119 17:08:04 -- setup/common.sh@57 -- # (( part++ )) 00:04:34.119 17:08:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.119 17:08:04 -- setup/common.sh@62 -- # wait 58704 00:04:34.119 17:08:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.119 17:08:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:34.119 17:08:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.119 17:08:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:34.119 17:08:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:34.119 17:08:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.378 17:08:04 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:34.378 17:08:04 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:34.378 17:08:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:34.378 17:08:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.378 17:08:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:34.378 17:08:04 -- setup/devices.sh@53 -- # local found=0 00:04:34.378 17:08:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.378 17:08:04 -- setup/devices.sh@56 -- # : 00:04:34.378 17:08:04 -- setup/devices.sh@59 -- # local pci status 00:04:34.378 17:08:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.378 17:08:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:34.378 17:08:04 -- setup/devices.sh@47 -- # setup output config 00:04:34.378 17:08:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.378 17:08:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.378 17:08:04 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.378 17:08:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:34.378 17:08:04 -- setup/devices.sh@63 -- # found=1 00:04:34.378 17:08:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.378 17:08:04 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.378 17:08:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.637 17:08:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.637 17:08:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.637 17:08:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.637 17:08:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.637 17:08:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.637 17:08:04 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:34.637 17:08:04 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.637 17:08:04 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.637 17:08:04 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:34.637 17:08:04 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:34.637 17:08:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.637 17:08:04 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.637 17:08:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.637 17:08:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:34.637 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.637 17:08:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.637 17:08:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.896 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.896 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:34.896 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.896 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:34.896 17:08:04 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:34.896 17:08:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:34.896 17:08:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.896 17:08:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:34.896 17:08:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:35.155 17:08:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.155 17:08:04 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.155 17:08:04 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:35.155 17:08:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:35.155 17:08:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.155 17:08:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.155 17:08:04 -- setup/devices.sh@53 -- # local found=0 00:04:35.155 17:08:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.155 17:08:04 -- setup/devices.sh@56 -- # : 00:04:35.155 17:08:04 -- setup/devices.sh@59 -- # local pci status 00:04:35.155 17:08:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.155 17:08:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:35.155 17:08:04 -- setup/devices.sh@47 -- # setup output config 00:04:35.155 17:08:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.155 17:08:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.155 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.155 17:08:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:35.155 17:08:05 -- setup/devices.sh@63 -- # found=1 00:04:35.155 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.155 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.155 
17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.413 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.413 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.413 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.413 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.413 17:08:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.413 17:08:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:35.413 17:08:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.413 17:08:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.413 17:08:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:35.413 17:08:05 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.413 17:08:05 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:35.413 17:08:05 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:35.413 17:08:05 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:35.413 17:08:05 -- setup/devices.sh@50 -- # local mount_point= 00:04:35.413 17:08:05 -- setup/devices.sh@51 -- # local test_file= 00:04:35.413 17:08:05 -- setup/devices.sh@53 -- # local found=0 00:04:35.413 17:08:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.413 17:08:05 -- setup/devices.sh@59 -- # local pci status 00:04:35.413 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.413 17:08:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:35.413 17:08:05 -- setup/devices.sh@47 -- # setup output config 00:04:35.413 17:08:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.413 17:08:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.980 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.980 17:08:05 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:35.980 17:08:05 -- setup/devices.sh@63 -- # found=1 00:04:35.980 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.980 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.980 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.980 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.980 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.980 17:08:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.980 17:08:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.980 17:08:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.980 17:08:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:35.980 17:08:05 -- setup/devices.sh@68 -- # return 0 00:04:35.980 17:08:05 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:35.980 17:08:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.980 17:08:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.980 17:08:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.980 17:08:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.239 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:36.239 00:04:36.239 real 0m3.972s 00:04:36.239 user 0m0.687s 00:04:36.239 sys 0m1.025s 00:04:36.239 17:08:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.239 17:08:05 -- common/autotest_common.sh@10 -- # set +x 00:04:36.239 ************************************ 00:04:36.239 END TEST nvme_mount 00:04:36.239 ************************************ 00:04:36.239 17:08:06 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:36.239 17:08:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.239 17:08:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.239 17:08:06 -- common/autotest_common.sh@10 -- # set +x 00:04:36.239 ************************************ 00:04:36.239 START TEST dm_mount 00:04:36.239 ************************************ 00:04:36.239 17:08:06 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:36.239 17:08:06 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:36.239 17:08:06 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:36.239 17:08:06 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:36.239 17:08:06 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:36.239 17:08:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:36.239 17:08:06 -- setup/common.sh@40 -- # local part_no=2 00:04:36.239 17:08:06 -- setup/common.sh@41 -- # local size=1073741824 00:04:36.239 17:08:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:36.239 17:08:06 -- setup/common.sh@44 -- # parts=() 00:04:36.239 17:08:06 -- setup/common.sh@44 -- # local parts 00:04:36.239 17:08:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:36.239 17:08:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.239 17:08:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.239 17:08:06 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.239 17:08:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.239 17:08:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.239 17:08:06 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.239 17:08:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.239 17:08:06 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:36.239 17:08:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:36.239 17:08:06 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:37.173 Creating new GPT entries in memory. 00:04:37.173 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:37.173 other utilities. 00:04:37.173 17:08:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:37.173 17:08:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.173 17:08:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:37.173 17:08:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:37.173 17:08:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:38.555 Creating new GPT entries in memory. 00:04:38.555 The operation has completed successfully. 00:04:38.555 17:08:08 -- setup/common.sh@57 -- # (( part++ )) 00:04:38.555 17:08:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.555 17:08:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:38.555 17:08:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.555 17:08:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:39.490 The operation has completed successfully. 00:04:39.490 17:08:09 -- setup/common.sh@57 -- # (( part++ )) 00:04:39.490 17:08:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.490 17:08:09 -- setup/common.sh@62 -- # wait 59141 00:04:39.490 17:08:09 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:39.490 17:08:09 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.490 17:08:09 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:39.490 17:08:09 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:39.490 17:08:09 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:39.491 17:08:09 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.491 17:08:09 -- setup/devices.sh@161 -- # break 00:04:39.491 17:08:09 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.491 17:08:09 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:39.491 17:08:09 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:39.491 17:08:09 -- setup/devices.sh@166 -- # dm=dm-0 00:04:39.491 17:08:09 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:39.491 17:08:09 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:39.491 17:08:09 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.491 17:08:09 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:39.491 17:08:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.491 17:08:09 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.491 17:08:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:39.491 17:08:09 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.491 17:08:09 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:39.491 17:08:09 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:39.491 17:08:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:39.491 17:08:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.491 17:08:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:39.491 17:08:09 -- setup/devices.sh@53 -- # local found=0 00:04:39.491 17:08:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.491 17:08:09 -- setup/devices.sh@56 -- # : 00:04:39.491 17:08:09 -- setup/devices.sh@59 -- # local pci status 00:04:39.491 17:08:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:39.491 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.491 17:08:09 -- setup/devices.sh@47 -- # setup output config 00:04:39.491 17:08:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.491 17:08:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:39.491 17:08:09 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.491 17:08:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:39.491 17:08:09 -- setup/devices.sh@63 -- # found=1 00:04:39.491 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.491 17:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.491 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.750 17:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.750 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.750 17:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:39.750 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.750 17:08:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.750 17:08:09 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:39.750 17:08:09 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.750 17:08:09 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.750 17:08:09 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:39.750 17:08:09 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.750 17:08:09 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:39.750 17:08:09 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:39.750 17:08:09 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:39.750 17:08:09 -- setup/devices.sh@50 -- # local mount_point= 00:04:39.750 17:08:09 -- setup/devices.sh@51 -- # local test_file= 00:04:39.750 17:08:09 -- setup/devices.sh@53 -- # local found=0 00:04:39.750 17:08:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.750 17:08:09 -- setup/devices.sh@59 -- # local pci status 00:04:39.750 17:08:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:39.750 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.750 17:08:09 -- setup/devices.sh@47 -- # setup output config 00:04:39.750 17:08:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.750 17:08:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.010 17:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:40.010 17:08:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:40.010 17:08:09 -- setup/devices.sh@63 -- # found=1 00:04:40.010 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.010 17:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:40.010 17:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.268 17:08:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:40.268 17:08:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.268 17:08:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:40.268 17:08:10 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.268 17:08:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.268 17:08:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:40.268 17:08:10 -- setup/devices.sh@68 -- # return 0 00:04:40.268 17:08:10 -- setup/devices.sh@187 -- # cleanup_dm 00:04:40.268 17:08:10 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:40.268 17:08:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:40.268 17:08:10 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:40.269 17:08:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.269 17:08:10 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:40.269 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:40.527 17:08:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:40.527 17:08:10 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:40.527 00:04:40.527 real 0m4.167s 00:04:40.527 user 0m0.462s 00:04:40.527 sys 0m0.660s 00:04:40.527 17:08:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.527 ************************************ 00:04:40.527 END TEST dm_mount 00:04:40.527 ************************************ 00:04:40.527 17:08:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.527 17:08:10 -- setup/devices.sh@1 -- # cleanup 00:04:40.527 17:08:10 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:40.527 17:08:10 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.527 17:08:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.527 17:08:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:40.527 17:08:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.527 17:08:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:40.784 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:40.784 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:40.784 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:40.784 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:40.784 17:08:10 -- setup/devices.sh@12 -- # cleanup_dm 00:04:40.785 17:08:10 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:40.785 17:08:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:40.785 17:08:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:40.785 17:08:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:40.785 17:08:10 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:40.785 17:08:10 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:40.785 00:04:40.785 real 0m9.755s 00:04:40.785 user 0m1.793s 00:04:40.785 sys 0m2.338s 00:04:40.785 17:08:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.785 17:08:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.785 ************************************ 00:04:40.785 END TEST devices 00:04:40.785 ************************************ 00:04:40.785 00:04:40.785 real 0m22.456s 00:04:40.785 user 0m7.386s 00:04:40.785 sys 0m9.325s 00:04:40.785 17:08:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.785 17:08:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.785 ************************************ 00:04:40.785 END TEST setup.sh 00:04:40.785 ************************************ 00:04:40.785 17:08:10 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:41.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.351 Hugepages 00:04:41.351 node hugesize free / total 00:04:41.351 node0 1048576kB 0 / 0 00:04:41.351 node0 2048kB 2048 / 2048 00:04:41.351 00:04:41.351 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.609 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:41.609 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:41.609 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:41.609 17:08:11 -- spdk/autotest.sh@130 -- # uname -s 00:04:41.609 17:08:11 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:41.609 17:08:11 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:41.609 17:08:11 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.546 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.546 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.546 17:08:12 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:43.481 17:08:13 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:43.481 17:08:13 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:43.481 17:08:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:43.481 17:08:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:43.481 17:08:13 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:43.481 17:08:13 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:43.481 17:08:13 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.481 17:08:13 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:43.481 17:08:13 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:43.481 17:08:13 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:04:43.481 17:08:13 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:43.481 17:08:13 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.049 Waiting for block devices as requested 00:04:44.049 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.049 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.049 17:08:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.049 17:08:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:44.049 17:08:14 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:44.049 17:08:14 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:04:44.049 17:08:14 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.049 17:08:14 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.308 17:08:14 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:04:44.308 17:08:14 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:04:44.308 17:08:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:44.308 17:08:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:44.308 17:08:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:44.308 17:08:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1543 -- # continue 00:04:44.308 17:08:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.308 17:08:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:44.308 17:08:14 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:44.308 17:08:14 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:04:44.308 17:08:14 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.308 17:08:14 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.308 17:08:14 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:44.308 17:08:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:44.308 17:08:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:44.308 17:08:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:44.308 17:08:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:44.308 17:08:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.308 17:08:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:44.308 17:08:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:44.308 17:08:14 -- common/autotest_common.sh@1543 -- # continue 00:04:44.308 17:08:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:44.308 17:08:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:44.308 17:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.308 17:08:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:44.308 17:08:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:44.308 17:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.308 17:08:14 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:04:44.876 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.134 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.135 17:08:14 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:45.135 17:08:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:45.135 17:08:14 -- common/autotest_common.sh@10 -- # set +x 00:04:45.135 17:08:14 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:45.135 17:08:14 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:45.135 17:08:14 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.135 17:08:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:45.135 17:08:14 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:45.135 17:08:14 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:45.135 17:08:14 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:45.135 17:08:14 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:45.135 17:08:14 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.135 17:08:15 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.135 17:08:15 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:45.135 17:08:15 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:04:45.135 17:08:15 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:45.135 17:08:15 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:45.135 17:08:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:45.135 17:08:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.135 17:08:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.135 17:08:15 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:45.135 17:08:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:45.135 17:08:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.135 17:08:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.135 17:08:15 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:45.135 17:08:15 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:45.135 17:08:15 -- common/autotest_common.sh@1579 -- # return 0 00:04:45.135 17:08:15 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:45.135 17:08:15 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:45.135 17:08:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.135 17:08:15 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.135 17:08:15 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:45.135 17:08:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:45.135 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:45.135 17:08:15 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.135 17:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.135 17:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.135 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:45.393 ************************************ 00:04:45.393 START TEST env 00:04:45.393 ************************************ 00:04:45.393 17:08:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.393 * Looking for test storage... 
00:04:45.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:45.393 17:08:15 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.393 17:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.393 17:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.393 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:45.393 ************************************ 00:04:45.393 START TEST env_memory 00:04:45.393 ************************************ 00:04:45.393 17:08:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.393 00:04:45.394 00:04:45.394 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.394 http://cunit.sourceforge.net/ 00:04:45.394 00:04:45.394 00:04:45.394 Suite: memory 00:04:45.394 Test: alloc and free memory map ...[2024-04-25 17:08:15.363932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.652 passed 00:04:45.652 Test: mem map translation ...[2024-04-25 17:08:15.395394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:45.652 [2024-04-25 17:08:15.395622] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:45.652 [2024-04-25 17:08:15.395922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:45.652 [2024-04-25 17:08:15.396163] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:45.652 passed 00:04:45.652 Test: mem map registration ...[2024-04-25 17:08:15.460587] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:45.652 [2024-04-25 17:08:15.460851] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:45.652 passed 00:04:45.652 Test: mem map adjacent registrations ...passed 00:04:45.652 00:04:45.652 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.652 suites 1 1 n/a 0 0 00:04:45.652 tests 4 4 4 0 0 00:04:45.652 asserts 152 152 152 0 n/a 00:04:45.652 00:04:45.652 Elapsed time = 0.220 seconds 00:04:45.652 00:04:45.652 real 0m0.236s 00:04:45.652 user 0m0.219s 00:04:45.652 sys 0m0.012s 00:04:45.652 17:08:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.652 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:45.652 ************************************ 00:04:45.652 END TEST env_memory 00:04:45.652 ************************************ 00:04:45.652 17:08:15 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:45.652 17:08:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.652 17:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.652 17:08:15 -- common/autotest_common.sh@10 -- # set +x 00:04:45.911 ************************************ 00:04:45.911 START TEST env_vtophys 00:04:45.911 ************************************ 00:04:45.911 17:08:15 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:45.911 EAL: lib.eal log level changed from notice to debug 00:04:45.911 EAL: Detected lcore 0 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 1 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 2 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 3 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 4 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 5 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 6 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 7 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 8 as core 0 on socket 0 00:04:45.911 EAL: Detected lcore 9 as core 0 on socket 0 00:04:45.911 EAL: Maximum logical cores by configuration: 128 00:04:45.911 EAL: Detected CPU lcores: 10 00:04:45.911 EAL: Detected NUMA nodes: 1 00:04:45.911 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:45.911 EAL: Detected shared linkage of DPDK 00:04:45.911 EAL: No shared files mode enabled, IPC will be disabled 00:04:45.911 EAL: Selected IOVA mode 'PA' 00:04:45.911 EAL: Probing VFIO support... 00:04:45.911 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:45.911 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:45.911 EAL: Ask a virtual area of 0x2e000 bytes 00:04:45.911 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:45.911 EAL: Setting up physically contiguous memory... 00:04:45.911 EAL: Setting maximum number of open files to 524288 00:04:45.911 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:45.911 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:45.911 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.911 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:45.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.911 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.911 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:45.911 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:45.911 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.911 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:45.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.911 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.911 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:45.911 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:45.911 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.911 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:45.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.911 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.911 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:45.911 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:45.911 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.911 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:45.911 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.911 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.911 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:45.911 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:45.911 EAL: Hugepages will be freed exactly as allocated. 
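(A quick cross-check of the EAL figures above: each memseg list is sized for n_segs:8192 segments of hugepage_sz:2097152 bytes, i.e. 8192 x 2 MiB = 16 GiB = 0x400000000, which matches the size of each of the four VA reservations at 0x200000200000, 0x200400400000, 0x200800600000 and 0x200c00800000. In other words, roughly 64 GiB of virtual address space is reserved up front, and none of it is actually backed by hugepages until the malloc tests below expand the heap.)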
00:04:45.911 EAL: No shared files mode enabled, IPC is disabled 00:04:45.911 EAL: No shared files mode enabled, IPC is disabled 00:04:45.911 EAL: TSC frequency is ~2200000 KHz 00:04:45.911 EAL: Main lcore 0 is ready (tid=7f716b258a00;cpuset=[0]) 00:04:45.911 EAL: Trying to obtain current memory policy. 00:04:45.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.911 EAL: Restoring previous memory policy: 0 00:04:45.911 EAL: request: mp_malloc_sync 00:04:45.911 EAL: No shared files mode enabled, IPC is disabled 00:04:45.911 EAL: Heap on socket 0 was expanded by 2MB 00:04:45.911 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:45.911 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:45.911 EAL: Mem event callback 'spdk:(nil)' registered 00:04:45.911 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:45.911 00:04:45.911 00:04:45.911 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.911 http://cunit.sourceforge.net/ 00:04:45.911 00:04:45.911 00:04:45.911 Suite: components_suite 00:04:45.911 Test: vtophys_malloc_test ...passed 00:04:45.911 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:45.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.911 EAL: Restoring previous memory policy: 4 00:04:45.911 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.911 EAL: request: mp_malloc_sync 00:04:45.911 EAL: No shared files mode enabled, IPC is disabled 00:04:45.911 EAL: Heap on socket 0 was expanded by 4MB 00:04:45.911 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.911 EAL: request: mp_malloc_sync 00:04:45.911 EAL: No shared files mode enabled, IPC is disabled 00:04:45.911 EAL: Heap on socket 0 was shrunk by 4MB 00:04:45.911 EAL: Trying to obtain current memory policy. 00:04:45.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.911 EAL: Restoring previous memory policy: 4 00:04:45.911 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.911 EAL: request: mp_malloc_sync 00:04:45.911 EAL: No shared files mode enabled, IPC is disabled 00:04:45.911 EAL: Heap on socket 0 was expanded by 6MB 00:04:45.911 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.911 EAL: request: mp_malloc_sync 00:04:45.911 EAL: No shared files mode enabled, IPC is disabled 00:04:45.911 EAL: Heap on socket 0 was shrunk by 6MB 00:04:45.911 EAL: Trying to obtain current memory policy. 00:04:45.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.912 EAL: Restoring previous memory policy: 4 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was expanded by 10MB 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was shrunk by 10MB 00:04:45.912 EAL: Trying to obtain current memory policy. 
00:04:45.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.912 EAL: Restoring previous memory policy: 4 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was expanded by 18MB 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was shrunk by 18MB 00:04:45.912 EAL: Trying to obtain current memory policy. 00:04:45.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.912 EAL: Restoring previous memory policy: 4 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was expanded by 34MB 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was shrunk by 34MB 00:04:45.912 EAL: Trying to obtain current memory policy. 00:04:45.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.912 EAL: Restoring previous memory policy: 4 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was expanded by 66MB 00:04:45.912 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.912 EAL: request: mp_malloc_sync 00:04:45.912 EAL: No shared files mode enabled, IPC is disabled 00:04:45.912 EAL: Heap on socket 0 was shrunk by 66MB 00:04:45.912 EAL: Trying to obtain current memory policy. 00:04:45.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.171 EAL: Restoring previous memory policy: 4 00:04:46.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.171 EAL: request: mp_malloc_sync 00:04:46.171 EAL: No shared files mode enabled, IPC is disabled 00:04:46.171 EAL: Heap on socket 0 was expanded by 130MB 00:04:46.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.171 EAL: request: mp_malloc_sync 00:04:46.171 EAL: No shared files mode enabled, IPC is disabled 00:04:46.171 EAL: Heap on socket 0 was shrunk by 130MB 00:04:46.171 EAL: Trying to obtain current memory policy. 00:04:46.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.171 EAL: Restoring previous memory policy: 4 00:04:46.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.171 EAL: request: mp_malloc_sync 00:04:46.171 EAL: No shared files mode enabled, IPC is disabled 00:04:46.171 EAL: Heap on socket 0 was expanded by 258MB 00:04:46.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.171 EAL: request: mp_malloc_sync 00:04:46.171 EAL: No shared files mode enabled, IPC is disabled 00:04:46.171 EAL: Heap on socket 0 was shrunk by 258MB 00:04:46.171 EAL: Trying to obtain current memory policy. 
00:04:46.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.171 EAL: Restoring previous memory policy: 4 00:04:46.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.171 EAL: request: mp_malloc_sync 00:04:46.171 EAL: No shared files mode enabled, IPC is disabled 00:04:46.171 EAL: Heap on socket 0 was expanded by 514MB 00:04:46.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.430 EAL: request: mp_malloc_sync 00:04:46.430 EAL: No shared files mode enabled, IPC is disabled 00:04:46.430 EAL: Heap on socket 0 was shrunk by 514MB 00:04:46.430 EAL: Trying to obtain current memory policy. 00:04:46.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.430 EAL: Restoring previous memory policy: 4 00:04:46.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.430 EAL: request: mp_malloc_sync 00:04:46.430 EAL: No shared files mode enabled, IPC is disabled 00:04:46.430 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.695 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.695 passed 00:04:46.695 00:04:46.695 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.695 suites 1 1 n/a 0 0 00:04:46.695 tests 2 2 2 0 0 00:04:46.695 asserts 5274 5274 5274 0 n/a 00:04:46.695 00:04:46.695 Elapsed time = 0.688 seconds 00:04:46.695 EAL: request: mp_malloc_sync 00:04:46.695 EAL: No shared files mode enabled, IPC is disabled 00:04:46.695 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:46.695 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.695 EAL: request: mp_malloc_sync 00:04:46.695 EAL: No shared files mode enabled, IPC is disabled 00:04:46.695 EAL: Heap on socket 0 was shrunk by 2MB 00:04:46.695 EAL: No shared files mode enabled, IPC is disabled 00:04:46.695 EAL: No shared files mode enabled, IPC is disabled 00:04:46.695 EAL: No shared files mode enabled, IPC is disabled 00:04:46.695 00:04:46.695 real 0m0.881s 00:04:46.695 user 0m0.448s 00:04:46.695 sys 0m0.305s 00:04:46.695 17:08:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.695 17:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.695 ************************************ 00:04:46.695 END TEST env_vtophys 00:04:46.695 ************************************ 00:04:46.695 17:08:16 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.695 17:08:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.695 17:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.695 17:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 START TEST env_pci 00:04:46.953 ************************************ 00:04:46.953 17:08:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:46.953 00:04:46.953 00:04:46.953 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.953 http://cunit.sourceforge.net/ 00:04:46.953 00:04:46.953 00:04:46.953 Suite: pci 00:04:46.953 Test: pci_hook ...[2024-04-25 17:08:16.699142] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60343 has claimed it 00:04:46.953 passed 00:04:46.953 00:04:46.953 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.953 suites 1 1 n/a 0 0 00:04:46.953 tests 1 1 1 0 0 00:04:46.953 asserts 25 25 25 0 n/a 00:04:46.953 00:04:46.953 Elapsed time = 0.002 seconds 00:04:46.953 EAL: Cannot find device (10000:00:01.0) 00:04:46.953 EAL: Failed to attach device 
on primary process 00:04:46.953 00:04:46.953 real 0m0.023s 00:04:46.953 user 0m0.011s 00:04:46.953 sys 0m0.011s 00:04:46.953 17:08:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.953 17:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 END TEST env_pci 00:04:46.953 ************************************ 00:04:46.953 17:08:16 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:46.953 17:08:16 -- env/env.sh@15 -- # uname 00:04:46.953 17:08:16 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:46.953 17:08:16 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:46.953 17:08:16 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.953 17:08:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:46.953 17:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.953 17:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:46.953 ************************************ 00:04:46.953 START TEST env_dpdk_post_init 00:04:46.953 ************************************ 00:04:46.953 17:08:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.953 EAL: Detected CPU lcores: 10 00:04:46.953 EAL: Detected NUMA nodes: 1 00:04:46.953 EAL: Detected shared linkage of DPDK 00:04:46.953 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.953 EAL: Selected IOVA mode 'PA' 00:04:47.211 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:47.211 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:47.211 Starting DPDK initialization... 00:04:47.211 Starting SPDK post initialization... 00:04:47.211 SPDK NVMe probe 00:04:47.211 Attaching to 0000:00:10.0 00:04:47.211 Attaching to 0000:00:11.0 00:04:47.211 Attached to 0000:00:10.0 00:04:47.211 Attached to 0000:00:11.0 00:04:47.211 Cleaning up... 
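The two controllers probed here (1b36 0010 at 0000:00:10.0 and 0000:00:11.0) can be attached from userspace because setup.sh bound them to uio_pci_generic earlier in the run. A quick way to confirm that binding state by hand, outside the test, would be something like the following (illustrative commands, not part of the captured output):

  readlink /sys/bus/pci/devices/0000:00:10.0/driver    # points at .../uio_pci_generic here, or .../nvme after setup.sh reset
  cat /sys/bus/pci/devices/0000:00:10.0/vendor /sys/bus/pci/devices/0000:00:10.0/device    # 0x1b36 / 0x0010
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh status  # prints the driver/block-device table shown earlier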
00:04:47.211 00:04:47.211 real 0m0.173s 00:04:47.211 user 0m0.040s 00:04:47.211 sys 0m0.033s 00:04:47.211 17:08:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.211 17:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:47.211 ************************************ 00:04:47.211 END TEST env_dpdk_post_init 00:04:47.211 ************************************ 00:04:47.211 17:08:17 -- env/env.sh@26 -- # uname 00:04:47.211 17:08:17 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:47.211 17:08:17 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.211 17:08:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.211 17:08:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.211 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.211 ************************************ 00:04:47.211 START TEST env_mem_callbacks 00:04:47.211 ************************************ 00:04:47.211 17:08:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:47.211 EAL: Detected CPU lcores: 10 00:04:47.211 EAL: Detected NUMA nodes: 1 00:04:47.211 EAL: Detected shared linkage of DPDK 00:04:47.211 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.211 EAL: Selected IOVA mode 'PA' 00:04:47.469 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.469 00:04:47.469 00:04:47.469 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.469 http://cunit.sourceforge.net/ 00:04:47.469 00:04:47.469 00:04:47.469 Suite: memory 00:04:47.469 Test: test ... 00:04:47.469 register 0x200000200000 2097152 00:04:47.469 malloc 3145728 00:04:47.469 register 0x200000400000 4194304 00:04:47.470 buf 0x200000500000 len 3145728 PASSED 00:04:47.470 malloc 64 00:04:47.470 buf 0x2000004fff40 len 64 PASSED 00:04:47.470 malloc 4194304 00:04:47.470 register 0x200000800000 6291456 00:04:47.470 buf 0x200000a00000 len 4194304 PASSED 00:04:47.470 free 0x200000500000 3145728 00:04:47.470 free 0x2000004fff40 64 00:04:47.470 unregister 0x200000400000 4194304 PASSED 00:04:47.470 free 0x200000a00000 4194304 00:04:47.470 unregister 0x200000800000 6291456 PASSED 00:04:47.470 malloc 8388608 00:04:47.470 register 0x200000400000 10485760 00:04:47.470 buf 0x200000600000 len 8388608 PASSED 00:04:47.470 free 0x200000600000 8388608 00:04:47.470 unregister 0x200000400000 10485760 PASSED 00:04:47.470 passed 00:04:47.470 00:04:47.470 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.470 suites 1 1 n/a 0 0 00:04:47.470 tests 1 1 1 0 0 00:04:47.470 asserts 15 15 15 0 n/a 00:04:47.470 00:04:47.470 Elapsed time = 0.006 seconds 00:04:47.470 00:04:47.470 real 0m0.139s 00:04:47.470 user 0m0.015s 00:04:47.470 sys 0m0.022s 00:04:47.470 17:08:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.470 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.470 ************************************ 00:04:47.470 END TEST env_mem_callbacks 00:04:47.470 ************************************ 00:04:47.470 00:04:47.470 real 0m2.151s 00:04:47.470 user 0m0.981s 00:04:47.470 sys 0m0.739s 00:04:47.470 17:08:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.470 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.470 ************************************ 00:04:47.470 END TEST env 00:04:47.470 ************************************ 00:04:47.470 17:08:17 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:04:47.470 17:08:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.470 17:08:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.470 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.470 ************************************ 00:04:47.470 START TEST rpc 00:04:47.470 ************************************ 00:04:47.470 17:08:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:47.728 * Looking for test storage... 00:04:47.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.728 17:08:17 -- rpc/rpc.sh@65 -- # spdk_pid=60472 00:04:47.728 17:08:17 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.728 17:08:17 -- rpc/rpc.sh@67 -- # waitforlisten 60472 00:04:47.728 17:08:17 -- common/autotest_common.sh@817 -- # '[' -z 60472 ']' 00:04:47.728 17:08:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.728 17:08:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:47.728 17:08:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.728 17:08:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:47.728 17:08:17 -- common/autotest_common.sh@10 -- # set +x 00:04:47.728 17:08:17 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:47.728 [2024-04-25 17:08:17.573075] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:04:47.728 [2024-04-25 17:08:17.573220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60472 ] 00:04:47.985 [2024-04-25 17:08:17.709270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.985 [2024-04-25 17:08:17.763213] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:47.985 [2024-04-25 17:08:17.763281] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60472' to capture a snapshot of events at runtime. 00:04:47.985 [2024-04-25 17:08:17.763307] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:47.985 [2024-04-25 17:08:17.763314] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:47.985 [2024-04-25 17:08:17.763320] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60472 for offline analysis/debug. 
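While spdk_tgt (pid 60472) comes up and starts listening on /var/tmp/spdk.sock, note that the rpc_cmd calls in the tests below issue ordinary JSON-RPC methods; roughly equivalent manual invocations against the same target (illustrative only, with names and sizes taken from this run) would be:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512    # creates Malloc0: 16384 blocks x 512 bytes = 8 MiB
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length  # 2 once both bdevs exist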
00:04:47.985 [2024-04-25 17:08:17.763359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.550 17:08:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:48.550 17:08:18 -- common/autotest_common.sh@850 -- # return 0 00:04:48.550 17:08:18 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.550 17:08:18 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.550 17:08:18 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:48.550 17:08:18 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:48.550 17:08:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.550 17:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.550 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.809 ************************************ 00:04:48.809 START TEST rpc_integrity 00:04:48.809 ************************************ 00:04:48.809 17:08:18 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:48.809 17:08:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:48.809 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:48.809 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.809 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:48.809 17:08:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:48.809 17:08:18 -- rpc/rpc.sh@13 -- # jq length 00:04:48.809 17:08:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:48.809 17:08:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:48.809 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:48.809 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.809 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:48.809 17:08:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:48.809 17:08:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:48.809 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:48.809 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.809 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:48.809 17:08:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:48.809 { 00:04:48.809 "aliases": [ 00:04:48.809 "f7f89473-8d7c-4607-83cd-6c1244e60bf1" 00:04:48.809 ], 00:04:48.809 "assigned_rate_limits": { 00:04:48.809 "r_mbytes_per_sec": 0, 00:04:48.809 "rw_ios_per_sec": 0, 00:04:48.809 "rw_mbytes_per_sec": 0, 00:04:48.809 "w_mbytes_per_sec": 0 00:04:48.809 }, 00:04:48.809 "block_size": 512, 00:04:48.809 "claimed": false, 00:04:48.809 "driver_specific": {}, 00:04:48.809 "memory_domains": [ 00:04:48.809 { 00:04:48.809 "dma_device_id": "system", 00:04:48.809 "dma_device_type": 1 00:04:48.809 }, 00:04:48.809 { 00:04:48.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.809 "dma_device_type": 2 00:04:48.809 } 00:04:48.809 ], 00:04:48.809 "name": "Malloc0", 00:04:48.809 "num_blocks": 16384, 00:04:48.809 "product_name": "Malloc disk", 00:04:48.809 "supported_io_types": { 00:04:48.809 "abort": true, 00:04:48.809 "compare": false, 00:04:48.809 "compare_and_write": false, 00:04:48.809 "flush": true, 00:04:48.809 "nvme_admin": false, 00:04:48.809 "nvme_io": false, 00:04:48.809 "read": true, 00:04:48.809 "reset": true, 
00:04:48.809 "unmap": true, 00:04:48.809 "write": true, 00:04:48.809 "write_zeroes": true 00:04:48.809 }, 00:04:48.809 "uuid": "f7f89473-8d7c-4607-83cd-6c1244e60bf1", 00:04:48.809 "zoned": false 00:04:48.809 } 00:04:48.809 ]' 00:04:48.809 17:08:18 -- rpc/rpc.sh@17 -- # jq length 00:04:48.809 17:08:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:48.809 17:08:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:48.809 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:48.809 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.809 [2024-04-25 17:08:18.714910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:48.809 [2024-04-25 17:08:18.714985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:48.809 [2024-04-25 17:08:18.715002] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17f4ed0 00:04:48.809 [2024-04-25 17:08:18.715011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:48.809 [2024-04-25 17:08:18.716565] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:48.809 [2024-04-25 17:08:18.716600] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:48.809 Passthru0 00:04:48.810 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:48.810 17:08:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:48.810 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:48.810 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.810 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:48.810 17:08:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:48.810 { 00:04:48.810 "aliases": [ 00:04:48.810 "f7f89473-8d7c-4607-83cd-6c1244e60bf1" 00:04:48.810 ], 00:04:48.810 "assigned_rate_limits": { 00:04:48.810 "r_mbytes_per_sec": 0, 00:04:48.810 "rw_ios_per_sec": 0, 00:04:48.810 "rw_mbytes_per_sec": 0, 00:04:48.810 "w_mbytes_per_sec": 0 00:04:48.810 }, 00:04:48.810 "block_size": 512, 00:04:48.810 "claim_type": "exclusive_write", 00:04:48.810 "claimed": true, 00:04:48.810 "driver_specific": {}, 00:04:48.810 "memory_domains": [ 00:04:48.810 { 00:04:48.810 "dma_device_id": "system", 00:04:48.810 "dma_device_type": 1 00:04:48.810 }, 00:04:48.810 { 00:04:48.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.810 "dma_device_type": 2 00:04:48.810 } 00:04:48.810 ], 00:04:48.810 "name": "Malloc0", 00:04:48.810 "num_blocks": 16384, 00:04:48.810 "product_name": "Malloc disk", 00:04:48.810 "supported_io_types": { 00:04:48.810 "abort": true, 00:04:48.810 "compare": false, 00:04:48.810 "compare_and_write": false, 00:04:48.810 "flush": true, 00:04:48.810 "nvme_admin": false, 00:04:48.810 "nvme_io": false, 00:04:48.810 "read": true, 00:04:48.810 "reset": true, 00:04:48.810 "unmap": true, 00:04:48.810 "write": true, 00:04:48.810 "write_zeroes": true 00:04:48.810 }, 00:04:48.810 "uuid": "f7f89473-8d7c-4607-83cd-6c1244e60bf1", 00:04:48.810 "zoned": false 00:04:48.810 }, 00:04:48.810 { 00:04:48.810 "aliases": [ 00:04:48.810 "e9ace76b-d5de-58e8-9cbd-84940bacc550" 00:04:48.810 ], 00:04:48.810 "assigned_rate_limits": { 00:04:48.810 "r_mbytes_per_sec": 0, 00:04:48.810 "rw_ios_per_sec": 0, 00:04:48.810 "rw_mbytes_per_sec": 0, 00:04:48.810 "w_mbytes_per_sec": 0 00:04:48.810 }, 00:04:48.810 "block_size": 512, 00:04:48.810 "claimed": false, 00:04:48.810 "driver_specific": { 00:04:48.810 "passthru": { 00:04:48.810 "base_bdev_name": "Malloc0", 00:04:48.810 "name": 
"Passthru0" 00:04:48.810 } 00:04:48.810 }, 00:04:48.810 "memory_domains": [ 00:04:48.810 { 00:04:48.810 "dma_device_id": "system", 00:04:48.810 "dma_device_type": 1 00:04:48.810 }, 00:04:48.810 { 00:04:48.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:48.810 "dma_device_type": 2 00:04:48.810 } 00:04:48.810 ], 00:04:48.810 "name": "Passthru0", 00:04:48.810 "num_blocks": 16384, 00:04:48.810 "product_name": "passthru", 00:04:48.810 "supported_io_types": { 00:04:48.810 "abort": true, 00:04:48.810 "compare": false, 00:04:48.810 "compare_and_write": false, 00:04:48.810 "flush": true, 00:04:48.810 "nvme_admin": false, 00:04:48.810 "nvme_io": false, 00:04:48.810 "read": true, 00:04:48.810 "reset": true, 00:04:48.810 "unmap": true, 00:04:48.810 "write": true, 00:04:48.810 "write_zeroes": true 00:04:48.810 }, 00:04:48.810 "uuid": "e9ace76b-d5de-58e8-9cbd-84940bacc550", 00:04:48.810 "zoned": false 00:04:48.810 } 00:04:48.810 ]' 00:04:48.810 17:08:18 -- rpc/rpc.sh@21 -- # jq length 00:04:49.069 17:08:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:49.069 17:08:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:49.069 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.069 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.069 17:08:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:49.069 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.069 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.069 17:08:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:49.069 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.069 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 17:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.069 17:08:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:49.069 17:08:18 -- rpc/rpc.sh@26 -- # jq length 00:04:49.069 17:08:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:49.069 00:04:49.069 real 0m0.312s 00:04:49.069 user 0m0.200s 00:04:49.069 sys 0m0.042s 00:04:49.069 17:08:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.069 ************************************ 00:04:49.069 END TEST rpc_integrity 00:04:49.069 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 ************************************ 00:04:49.069 17:08:18 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:49.069 17:08:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.069 17:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.069 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 ************************************ 00:04:49.069 START TEST rpc_plugins 00:04:49.069 ************************************ 00:04:49.069 17:08:18 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:49.069 17:08:18 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:49.069 17:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.069 17:08:18 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 17:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.069 17:08:19 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:49.069 17:08:19 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:49.069 17:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.069 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.069 17:08:19 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.069 17:08:19 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:49.069 { 00:04:49.069 "aliases": [ 00:04:49.069 "57792455-2c00-437c-9d45-fce4c9f350e0" 00:04:49.069 ], 00:04:49.069 "assigned_rate_limits": { 00:04:49.069 "r_mbytes_per_sec": 0, 00:04:49.069 "rw_ios_per_sec": 0, 00:04:49.069 "rw_mbytes_per_sec": 0, 00:04:49.069 "w_mbytes_per_sec": 0 00:04:49.069 }, 00:04:49.069 "block_size": 4096, 00:04:49.069 "claimed": false, 00:04:49.069 "driver_specific": {}, 00:04:49.069 "memory_domains": [ 00:04:49.069 { 00:04:49.069 "dma_device_id": "system", 00:04:49.069 "dma_device_type": 1 00:04:49.069 }, 00:04:49.069 { 00:04:49.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.069 "dma_device_type": 2 00:04:49.069 } 00:04:49.069 ], 00:04:49.069 "name": "Malloc1", 00:04:49.069 "num_blocks": 256, 00:04:49.069 "product_name": "Malloc disk", 00:04:49.069 "supported_io_types": { 00:04:49.069 "abort": true, 00:04:49.069 "compare": false, 00:04:49.069 "compare_and_write": false, 00:04:49.069 "flush": true, 00:04:49.069 "nvme_admin": false, 00:04:49.069 "nvme_io": false, 00:04:49.069 "read": true, 00:04:49.069 "reset": true, 00:04:49.069 "unmap": true, 00:04:49.069 "write": true, 00:04:49.069 "write_zeroes": true 00:04:49.069 }, 00:04:49.069 "uuid": "57792455-2c00-437c-9d45-fce4c9f350e0", 00:04:49.069 "zoned": false 00:04:49.069 } 00:04:49.069 ]' 00:04:49.069 17:08:19 -- rpc/rpc.sh@32 -- # jq length 00:04:49.328 17:08:19 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:49.328 17:08:19 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:49.328 17:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.328 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.328 17:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.328 17:08:19 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:49.328 17:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.328 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.328 17:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.328 17:08:19 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:49.328 17:08:19 -- rpc/rpc.sh@36 -- # jq length 00:04:49.328 17:08:19 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:49.328 00:04:49.328 real 0m0.158s 00:04:49.328 user 0m0.108s 00:04:49.328 sys 0m0.011s 00:04:49.328 17:08:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.328 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.328 ************************************ 00:04:49.328 END TEST rpc_plugins 00:04:49.328 ************************************ 00:04:49.328 17:08:19 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:49.328 17:08:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.328 17:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.328 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.328 ************************************ 00:04:49.328 START TEST rpc_trace_cmd_test 00:04:49.328 ************************************ 00:04:49.328 17:08:19 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:49.328 17:08:19 -- rpc/rpc.sh@40 -- # local info 00:04:49.328 17:08:19 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:49.328 17:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.328 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.328 17:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.328 17:08:19 -- rpc/rpc.sh@42 -- # 
info='{ 00:04:49.328 "bdev": { 00:04:49.328 "mask": "0x8", 00:04:49.328 "tpoint_mask": "0xffffffffffffffff" 00:04:49.328 }, 00:04:49.328 "bdev_nvme": { 00:04:49.328 "mask": "0x4000", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "blobfs": { 00:04:49.328 "mask": "0x80", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "dsa": { 00:04:49.328 "mask": "0x200", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "ftl": { 00:04:49.328 "mask": "0x40", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "iaa": { 00:04:49.328 "mask": "0x1000", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "iscsi_conn": { 00:04:49.328 "mask": "0x2", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "nvme_pcie": { 00:04:49.328 "mask": "0x800", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "nvme_tcp": { 00:04:49.328 "mask": "0x2000", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "nvmf_rdma": { 00:04:49.328 "mask": "0x10", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "nvmf_tcp": { 00:04:49.328 "mask": "0x20", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "scsi": { 00:04:49.328 "mask": "0x4", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "sock": { 00:04:49.328 "mask": "0x8000", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "thread": { 00:04:49.328 "mask": "0x400", 00:04:49.328 "tpoint_mask": "0x0" 00:04:49.328 }, 00:04:49.328 "tpoint_group_mask": "0x8", 00:04:49.328 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60472" 00:04:49.328 }' 00:04:49.328 17:08:19 -- rpc/rpc.sh@43 -- # jq length 00:04:49.587 17:08:19 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:49.587 17:08:19 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:49.587 17:08:19 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:49.587 17:08:19 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:49.587 17:08:19 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:49.587 17:08:19 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:49.587 17:08:19 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:49.587 17:08:19 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:49.587 17:08:19 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:49.587 00:04:49.587 real 0m0.289s 00:04:49.587 user 0m0.249s 00:04:49.587 sys 0m0.026s 00:04:49.587 17:08:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.587 ************************************ 00:04:49.587 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.587 END TEST rpc_trace_cmd_test 00:04:49.587 ************************************ 00:04:49.846 17:08:19 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:49.846 17:08:19 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:49.846 17:08:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.846 17:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.846 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.846 ************************************ 00:04:49.846 START TEST go_rpc 00:04:49.846 ************************************ 00:04:49.846 17:08:19 -- common/autotest_common.sh@1111 -- # go_rpc 00:04:49.846 17:08:19 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:49.846 17:08:19 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:49.846 17:08:19 -- rpc/rpc.sh@52 -- # jq length 00:04:49.846 17:08:19 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:49.846 17:08:19 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.846 
17:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.846 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.846 17:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:49.846 17:08:19 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:49.846 17:08:19 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:49.846 17:08:19 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["5e4ef02d-dd6a-4b66-867d-ebb88f32a791"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"5e4ef02d-dd6a-4b66-867d-ebb88f32a791","zoned":false}]' 00:04:49.846 17:08:19 -- rpc/rpc.sh@57 -- # jq length 00:04:49.846 17:08:19 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:49.846 17:08:19 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:49.846 17:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:49.846 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:50.106 17:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.106 17:08:19 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:50.106 17:08:19 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:50.106 17:08:19 -- rpc/rpc.sh@61 -- # jq length 00:04:50.106 17:08:19 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:50.106 00:04:50.106 real 0m0.219s 00:04:50.106 user 0m0.150s 00:04:50.106 sys 0m0.033s 00:04:50.106 17:08:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.106 ************************************ 00:04:50.106 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:50.106 END TEST go_rpc 00:04:50.106 ************************************ 00:04:50.106 17:08:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.106 17:08:19 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.106 17:08:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.106 17:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.106 17:08:19 -- common/autotest_common.sh@10 -- # set +x 00:04:50.106 ************************************ 00:04:50.106 START TEST rpc_daemon_integrity 00:04:50.106 ************************************ 00:04:50.106 17:08:20 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:50.106 17:08:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.106 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.106 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.106 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.106 17:08:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.106 17:08:20 -- rpc/rpc.sh@13 -- # jq length 00:04:50.365 17:08:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.365 17:08:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.365 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.365 17:08:20 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:50.365 17:08:20 -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:04:50.365 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.365 17:08:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.365 { 00:04:50.365 "aliases": [ 00:04:50.365 "db45db87-c71a-4a5b-adf2-3e8c57c6a281" 00:04:50.365 ], 00:04:50.365 "assigned_rate_limits": { 00:04:50.365 "r_mbytes_per_sec": 0, 00:04:50.365 "rw_ios_per_sec": 0, 00:04:50.365 "rw_mbytes_per_sec": 0, 00:04:50.365 "w_mbytes_per_sec": 0 00:04:50.365 }, 00:04:50.365 "block_size": 512, 00:04:50.365 "claimed": false, 00:04:50.365 "driver_specific": {}, 00:04:50.365 "memory_domains": [ 00:04:50.365 { 00:04:50.365 "dma_device_id": "system", 00:04:50.365 "dma_device_type": 1 00:04:50.365 }, 00:04:50.365 { 00:04:50.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.365 "dma_device_type": 2 00:04:50.365 } 00:04:50.365 ], 00:04:50.365 "name": "Malloc3", 00:04:50.365 "num_blocks": 16384, 00:04:50.365 "product_name": "Malloc disk", 00:04:50.365 "supported_io_types": { 00:04:50.365 "abort": true, 00:04:50.365 "compare": false, 00:04:50.365 "compare_and_write": false, 00:04:50.365 "flush": true, 00:04:50.365 "nvme_admin": false, 00:04:50.365 "nvme_io": false, 00:04:50.365 "read": true, 00:04:50.365 "reset": true, 00:04:50.365 "unmap": true, 00:04:50.365 "write": true, 00:04:50.365 "write_zeroes": true 00:04:50.365 }, 00:04:50.365 "uuid": "db45db87-c71a-4a5b-adf2-3e8c57c6a281", 00:04:50.365 "zoned": false 00:04:50.365 } 00:04:50.365 ]' 00:04:50.365 17:08:20 -- rpc/rpc.sh@17 -- # jq length 00:04:50.365 17:08:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.365 17:08:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:50.365 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 [2024-04-25 17:08:20.171487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:50.365 [2024-04-25 17:08:20.171558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.365 [2024-04-25 17:08:20.171575] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15ef400 00:04:50.365 [2024-04-25 17:08:20.171584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.365 [2024-04-25 17:08:20.173179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.365 [2024-04-25 17:08:20.173225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.365 Passthru0 00:04:50.365 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.365 17:08:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.365 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.365 17:08:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.365 { 00:04:50.365 "aliases": [ 00:04:50.365 "db45db87-c71a-4a5b-adf2-3e8c57c6a281" 00:04:50.365 ], 00:04:50.365 "assigned_rate_limits": { 00:04:50.365 "r_mbytes_per_sec": 0, 00:04:50.365 "rw_ios_per_sec": 0, 00:04:50.365 "rw_mbytes_per_sec": 0, 00:04:50.365 "w_mbytes_per_sec": 0 00:04:50.365 }, 00:04:50.365 "block_size": 512, 00:04:50.365 "claim_type": "exclusive_write", 00:04:50.365 "claimed": true, 00:04:50.365 "driver_specific": {}, 00:04:50.365 
"memory_domains": [ 00:04:50.365 { 00:04:50.365 "dma_device_id": "system", 00:04:50.365 "dma_device_type": 1 00:04:50.365 }, 00:04:50.365 { 00:04:50.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.365 "dma_device_type": 2 00:04:50.365 } 00:04:50.365 ], 00:04:50.365 "name": "Malloc3", 00:04:50.365 "num_blocks": 16384, 00:04:50.365 "product_name": "Malloc disk", 00:04:50.365 "supported_io_types": { 00:04:50.365 "abort": true, 00:04:50.365 "compare": false, 00:04:50.365 "compare_and_write": false, 00:04:50.365 "flush": true, 00:04:50.365 "nvme_admin": false, 00:04:50.365 "nvme_io": false, 00:04:50.365 "read": true, 00:04:50.365 "reset": true, 00:04:50.365 "unmap": true, 00:04:50.365 "write": true, 00:04:50.365 "write_zeroes": true 00:04:50.365 }, 00:04:50.365 "uuid": "db45db87-c71a-4a5b-adf2-3e8c57c6a281", 00:04:50.365 "zoned": false 00:04:50.365 }, 00:04:50.365 { 00:04:50.365 "aliases": [ 00:04:50.365 "a32b1c8d-3c90-588e-be51-4dfd0c8d7e2b" 00:04:50.365 ], 00:04:50.365 "assigned_rate_limits": { 00:04:50.365 "r_mbytes_per_sec": 0, 00:04:50.365 "rw_ios_per_sec": 0, 00:04:50.365 "rw_mbytes_per_sec": 0, 00:04:50.365 "w_mbytes_per_sec": 0 00:04:50.365 }, 00:04:50.365 "block_size": 512, 00:04:50.365 "claimed": false, 00:04:50.365 "driver_specific": { 00:04:50.365 "passthru": { 00:04:50.365 "base_bdev_name": "Malloc3", 00:04:50.365 "name": "Passthru0" 00:04:50.365 } 00:04:50.365 }, 00:04:50.365 "memory_domains": [ 00:04:50.365 { 00:04:50.365 "dma_device_id": "system", 00:04:50.365 "dma_device_type": 1 00:04:50.365 }, 00:04:50.365 { 00:04:50.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.365 "dma_device_type": 2 00:04:50.365 } 00:04:50.365 ], 00:04:50.365 "name": "Passthru0", 00:04:50.365 "num_blocks": 16384, 00:04:50.365 "product_name": "passthru", 00:04:50.365 "supported_io_types": { 00:04:50.365 "abort": true, 00:04:50.365 "compare": false, 00:04:50.365 "compare_and_write": false, 00:04:50.365 "flush": true, 00:04:50.365 "nvme_admin": false, 00:04:50.365 "nvme_io": false, 00:04:50.365 "read": true, 00:04:50.365 "reset": true, 00:04:50.365 "unmap": true, 00:04:50.365 "write": true, 00:04:50.365 "write_zeroes": true 00:04:50.365 }, 00:04:50.365 "uuid": "a32b1c8d-3c90-588e-be51-4dfd0c8d7e2b", 00:04:50.365 "zoned": false 00:04:50.365 } 00:04:50.365 ]' 00:04:50.365 17:08:20 -- rpc/rpc.sh@21 -- # jq length 00:04:50.365 17:08:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.365 17:08:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.365 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.365 17:08:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:50.365 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.365 17:08:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.365 17:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 17:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:50.365 17:08:20 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.365 17:08:20 -- rpc/rpc.sh@26 -- # jq length 00:04:50.365 17:08:20 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.365 00:04:50.365 real 0m0.315s 00:04:50.365 user 0m0.198s 00:04:50.365 sys 0m0.042s 00:04:50.365 
17:08:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.365 ************************************ 00:04:50.365 END TEST rpc_daemon_integrity 00:04:50.365 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.365 ************************************ 00:04:50.624 17:08:20 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:50.624 17:08:20 -- rpc/rpc.sh@84 -- # killprocess 60472 00:04:50.624 17:08:20 -- common/autotest_common.sh@936 -- # '[' -z 60472 ']' 00:04:50.624 17:08:20 -- common/autotest_common.sh@940 -- # kill -0 60472 00:04:50.624 17:08:20 -- common/autotest_common.sh@941 -- # uname 00:04:50.624 17:08:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:50.624 17:08:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60472 00:04:50.624 17:08:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:50.624 17:08:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:50.624 killing process with pid 60472 00:04:50.624 17:08:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60472' 00:04:50.624 17:08:20 -- common/autotest_common.sh@955 -- # kill 60472 00:04:50.624 17:08:20 -- common/autotest_common.sh@960 -- # wait 60472 00:04:50.883 00:04:50.883 real 0m3.249s 00:04:50.883 user 0m4.447s 00:04:50.883 sys 0m0.749s 00:04:50.883 17:08:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.883 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.883 ************************************ 00:04:50.883 END TEST rpc 00:04:50.883 ************************************ 00:04:50.883 17:08:20 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.883 17:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.883 17:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.883 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.883 ************************************ 00:04:50.883 START TEST skip_rpc 00:04:50.883 ************************************ 00:04:50.883 17:08:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:50.883 * Looking for test storage... 00:04:50.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.883 17:08:20 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:50.883 17:08:20 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:50.883 17:08:20 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:50.883 17:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.883 17:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.883 17:08:20 -- common/autotest_common.sh@10 -- # set +x 00:04:51.141 ************************************ 00:04:51.141 START TEST skip_rpc 00:04:51.141 ************************************ 00:04:51.141 17:08:20 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:51.141 17:08:20 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60763 00:04:51.141 17:08:20 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:51.141 17:08:20 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.141 17:08:20 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:51.141 [2024-04-25 17:08:20.993171] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
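Note: the rpc_integrity and rpc_daemon_integrity tests that just finished above exercise the same create/inspect/teardown round trip over the JSON-RPC socket. A minimal sketch of that flow, driven by hand with scripts/rpc.py against a target already listening on the default /var/tmp/spdk.sock (bdev names and sizes mirror the log; the exact invocation is illustrative, not the test's verbatim code):

    scripts/rpc.py bdev_malloc_create -b Malloc0 8 512            # 8 MB malloc bdev, 512-byte blocks
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # passthru claims Malloc0 (claim_type exclusive_write)
    scripts/rpc.py bdev_get_bdevs | jq length                     # expect 2: Malloc0 plus Passthru0
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_get_bdevs | jq length                     # expect 0 once both are gone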
00:04:51.141 [2024-04-25 17:08:20.993280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60763 ] 00:04:51.400 [2024-04-25 17:08:21.130625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.400 [2024-04-25 17:08:21.181193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.693 17:08:25 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:56.693 17:08:25 -- common/autotest_common.sh@638 -- # local es=0 00:04:56.693 17:08:25 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:56.693 17:08:25 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:56.693 17:08:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:56.693 17:08:25 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:56.693 17:08:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:56.693 17:08:25 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:56.693 17:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.693 17:08:25 -- common/autotest_common.sh@10 -- # set +x 00:04:56.693 2024/04/25 17:08:25 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:56.693 17:08:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:56.693 17:08:25 -- common/autotest_common.sh@641 -- # es=1 00:04:56.693 17:08:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:56.693 17:08:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:56.693 17:08:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:56.693 17:08:25 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:56.693 17:08:25 -- rpc/skip_rpc.sh@23 -- # killprocess 60763 00:04:56.693 17:08:25 -- common/autotest_common.sh@936 -- # '[' -z 60763 ']' 00:04:56.693 17:08:25 -- common/autotest_common.sh@940 -- # kill -0 60763 00:04:56.693 17:08:25 -- common/autotest_common.sh@941 -- # uname 00:04:56.693 17:08:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.693 17:08:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60763 00:04:56.693 17:08:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.693 killing process with pid 60763 00:04:56.693 17:08:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.693 17:08:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60763' 00:04:56.693 17:08:25 -- common/autotest_common.sh@955 -- # kill 60763 00:04:56.693 17:08:25 -- common/autotest_common.sh@960 -- # wait 60763 00:04:56.693 00:04:56.693 real 0m5.286s 00:04:56.693 user 0m5.019s 00:04:56.693 sys 0m0.170s 00:04:56.693 17:08:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.693 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.693 ************************************ 00:04:56.693 END TEST skip_rpc 00:04:56.693 ************************************ 00:04:56.693 17:08:26 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.693 17:08:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.694 17:08:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.694 17:08:26 -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.694 ************************************ 00:04:56.694 START TEST skip_rpc_with_json 00:04:56.694 ************************************ 00:04:56.694 17:08:26 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:56.694 17:08:26 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.694 17:08:26 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60860 00:04:56.694 17:08:26 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.694 17:08:26 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.694 17:08:26 -- rpc/skip_rpc.sh@31 -- # waitforlisten 60860 00:04:56.694 17:08:26 -- common/autotest_common.sh@817 -- # '[' -z 60860 ']' 00:04:56.694 17:08:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.694 17:08:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:56.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.694 17:08:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.694 17:08:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:56.694 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.694 [2024-04-25 17:08:26.396792] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:04:56.694 [2024-04-25 17:08:26.396893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:04:56.694 [2024-04-25 17:08:26.532900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.694 [2024-04-25 17:08:26.582912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.999 17:08:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:56.999 17:08:26 -- common/autotest_common.sh@850 -- # return 0 00:04:56.999 17:08:26 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:56.999 17:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.999 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.999 [2024-04-25 17:08:26.738583] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:56.999 2024/04/25 17:08:26 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:56.999 request: 00:04:56.999 { 00:04:56.999 "method": "nvmf_get_transports", 00:04:56.999 "params": { 00:04:56.999 "trtype": "tcp" 00:04:56.999 } 00:04:56.999 } 00:04:56.999 Got JSON-RPC error response 00:04:56.999 GoRPCClient: error on JSON-RPC call 00:04:56.999 17:08:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:56.999 17:08:26 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:56.999 17:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.999 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.999 [2024-04-25 17:08:26.750673] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.999 17:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.999 17:08:26 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:56.999 17:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:56.999 17:08:26 -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.999 17:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.999 17:08:26 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.999 { 00:04:56.999 "subsystems": [ 00:04:56.999 { 00:04:56.999 "subsystem": "vfio_user_target", 00:04:56.999 "config": null 00:04:56.999 }, 00:04:56.999 { 00:04:56.999 "subsystem": "keyring", 00:04:56.999 "config": [] 00:04:56.999 }, 00:04:56.999 { 00:04:56.999 "subsystem": "iobuf", 00:04:56.999 "config": [ 00:04:56.999 { 00:04:56.999 "method": "iobuf_set_options", 00:04:56.999 "params": { 00:04:56.999 "large_bufsize": 135168, 00:04:56.999 "large_pool_count": 1024, 00:04:56.999 "small_bufsize": 8192, 00:04:56.999 "small_pool_count": 8192 00:04:56.999 } 00:04:56.999 } 00:04:56.999 ] 00:04:56.999 }, 00:04:56.999 { 00:04:56.999 "subsystem": "sock", 00:04:56.999 "config": [ 00:04:56.999 { 00:04:56.999 "method": "sock_impl_set_options", 00:04:56.999 "params": { 00:04:56.999 "enable_ktls": false, 00:04:56.999 "enable_placement_id": 0, 00:04:56.999 "enable_quickack": false, 00:04:56.999 "enable_recv_pipe": true, 00:04:56.999 "enable_zerocopy_send_client": false, 00:04:56.999 "enable_zerocopy_send_server": true, 00:04:56.999 "impl_name": "posix", 00:04:56.999 "recv_buf_size": 2097152, 00:04:56.999 "send_buf_size": 2097152, 00:04:56.999 "tls_version": 0, 00:04:56.999 "zerocopy_threshold": 0 00:04:56.999 } 00:04:56.999 }, 00:04:56.999 { 00:04:56.999 "method": "sock_impl_set_options", 00:04:56.999 "params": { 00:04:57.000 "enable_ktls": false, 00:04:57.000 "enable_placement_id": 0, 00:04:57.000 "enable_quickack": false, 00:04:57.000 "enable_recv_pipe": true, 00:04:57.000 "enable_zerocopy_send_client": false, 00:04:57.000 "enable_zerocopy_send_server": true, 00:04:57.000 "impl_name": "ssl", 00:04:57.000 "recv_buf_size": 4096, 00:04:57.000 "send_buf_size": 4096, 00:04:57.000 "tls_version": 0, 00:04:57.000 "zerocopy_threshold": 0 00:04:57.000 } 00:04:57.000 } 00:04:57.000 ] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "vmd", 00:04:57.000 "config": [] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "accel", 00:04:57.000 "config": [ 00:04:57.000 { 00:04:57.000 "method": "accel_set_options", 00:04:57.000 "params": { 00:04:57.000 "buf_count": 2048, 00:04:57.000 "large_cache_size": 16, 00:04:57.000 "sequence_count": 2048, 00:04:57.000 "small_cache_size": 128, 00:04:57.000 "task_count": 2048 00:04:57.000 } 00:04:57.000 } 00:04:57.000 ] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "bdev", 00:04:57.000 "config": [ 00:04:57.000 { 00:04:57.000 "method": "bdev_set_options", 00:04:57.000 "params": { 00:04:57.000 "bdev_auto_examine": true, 00:04:57.000 "bdev_io_cache_size": 256, 00:04:57.000 "bdev_io_pool_size": 65535, 00:04:57.000 "iobuf_large_cache_size": 16, 00:04:57.000 "iobuf_small_cache_size": 128 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "bdev_raid_set_options", 00:04:57.000 "params": { 00:04:57.000 "process_window_size_kb": 1024 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "bdev_iscsi_set_options", 00:04:57.000 "params": { 00:04:57.000 "timeout_sec": 30 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "bdev_nvme_set_options", 00:04:57.000 "params": { 00:04:57.000 "action_on_timeout": "none", 00:04:57.000 "allow_accel_sequence": false, 00:04:57.000 "arbitration_burst": 0, 00:04:57.000 "bdev_retry_count": 3, 00:04:57.000 "ctrlr_loss_timeout_sec": 0, 00:04:57.000 "delay_cmd_submit": 
true, 00:04:57.000 "dhchap_dhgroups": [ 00:04:57.000 "null", 00:04:57.000 "ffdhe2048", 00:04:57.000 "ffdhe3072", 00:04:57.000 "ffdhe4096", 00:04:57.000 "ffdhe6144", 00:04:57.000 "ffdhe8192" 00:04:57.000 ], 00:04:57.000 "dhchap_digests": [ 00:04:57.000 "sha256", 00:04:57.000 "sha384", 00:04:57.000 "sha512" 00:04:57.000 ], 00:04:57.000 "disable_auto_failback": false, 00:04:57.000 "fast_io_fail_timeout_sec": 0, 00:04:57.000 "generate_uuids": false, 00:04:57.000 "high_priority_weight": 0, 00:04:57.000 "io_path_stat": false, 00:04:57.000 "io_queue_requests": 0, 00:04:57.000 "keep_alive_timeout_ms": 10000, 00:04:57.000 "low_priority_weight": 0, 00:04:57.000 "medium_priority_weight": 0, 00:04:57.000 "nvme_adminq_poll_period_us": 10000, 00:04:57.000 "nvme_error_stat": false, 00:04:57.000 "nvme_ioq_poll_period_us": 0, 00:04:57.000 "rdma_cm_event_timeout_ms": 0, 00:04:57.000 "rdma_max_cq_size": 0, 00:04:57.000 "rdma_srq_size": 0, 00:04:57.000 "reconnect_delay_sec": 0, 00:04:57.000 "timeout_admin_us": 0, 00:04:57.000 "timeout_us": 0, 00:04:57.000 "transport_ack_timeout": 0, 00:04:57.000 "transport_retry_count": 4, 00:04:57.000 "transport_tos": 0 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "bdev_nvme_set_hotplug", 00:04:57.000 "params": { 00:04:57.000 "enable": false, 00:04:57.000 "period_us": 100000 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "bdev_wait_for_examine" 00:04:57.000 } 00:04:57.000 ] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "scsi", 00:04:57.000 "config": null 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "scheduler", 00:04:57.000 "config": [ 00:04:57.000 { 00:04:57.000 "method": "framework_set_scheduler", 00:04:57.000 "params": { 00:04:57.000 "name": "static" 00:04:57.000 } 00:04:57.000 } 00:04:57.000 ] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "vhost_scsi", 00:04:57.000 "config": [] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "vhost_blk", 00:04:57.000 "config": [] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "ublk", 00:04:57.000 "config": [] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "nbd", 00:04:57.000 "config": [] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "nvmf", 00:04:57.000 "config": [ 00:04:57.000 { 00:04:57.000 "method": "nvmf_set_config", 00:04:57.000 "params": { 00:04:57.000 "admin_cmd_passthru": { 00:04:57.000 "identify_ctrlr": false 00:04:57.000 }, 00:04:57.000 "discovery_filter": "match_any" 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "nvmf_set_max_subsystems", 00:04:57.000 "params": { 00:04:57.000 "max_subsystems": 1024 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "nvmf_set_crdt", 00:04:57.000 "params": { 00:04:57.000 "crdt1": 0, 00:04:57.000 "crdt2": 0, 00:04:57.000 "crdt3": 0 00:04:57.000 } 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "method": "nvmf_create_transport", 00:04:57.000 "params": { 00:04:57.000 "abort_timeout_sec": 1, 00:04:57.000 "ack_timeout": 0, 00:04:57.000 "buf_cache_size": 4294967295, 00:04:57.000 "c2h_success": true, 00:04:57.000 "data_wr_pool_size": 0, 00:04:57.000 "dif_insert_or_strip": false, 00:04:57.000 "in_capsule_data_size": 4096, 00:04:57.000 "io_unit_size": 131072, 00:04:57.000 "max_aq_depth": 128, 00:04:57.000 "max_io_qpairs_per_ctrlr": 127, 00:04:57.000 "max_io_size": 131072, 00:04:57.000 "max_queue_depth": 128, 00:04:57.000 "num_shared_buffers": 511, 00:04:57.000 "sock_priority": 0, 00:04:57.000 "trtype": "TCP", 00:04:57.000 "zcopy": false 00:04:57.000 
} 00:04:57.000 } 00:04:57.000 ] 00:04:57.000 }, 00:04:57.000 { 00:04:57.000 "subsystem": "iscsi", 00:04:57.000 "config": [ 00:04:57.000 { 00:04:57.000 "method": "iscsi_set_options", 00:04:57.000 "params": { 00:04:57.000 "allow_duplicated_isid": false, 00:04:57.000 "chap_group": 0, 00:04:57.000 "data_out_pool_size": 2048, 00:04:57.000 "default_time2retain": 20, 00:04:57.000 "default_time2wait": 2, 00:04:57.000 "disable_chap": false, 00:04:57.000 "error_recovery_level": 0, 00:04:57.000 "first_burst_length": 8192, 00:04:57.000 "immediate_data": true, 00:04:57.000 "immediate_data_pool_size": 16384, 00:04:57.000 "max_connections_per_session": 2, 00:04:57.000 "max_large_datain_per_connection": 64, 00:04:57.000 "max_queue_depth": 64, 00:04:57.000 "max_r2t_per_connection": 4, 00:04:57.000 "max_sessions": 128, 00:04:57.000 "mutual_chap": false, 00:04:57.000 "node_base": "iqn.2016-06.io.spdk", 00:04:57.000 "nop_in_interval": 30, 00:04:57.000 "nop_timeout": 60, 00:04:57.000 "pdu_pool_size": 36864, 00:04:57.000 "require_chap": false 00:04:57.000 } 00:04:57.000 } 00:04:57.000 ] 00:04:57.000 } 00:04:57.000 ] 00:04:57.000 } 00:04:57.000 17:08:26 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:57.000 17:08:26 -- rpc/skip_rpc.sh@40 -- # killprocess 60860 00:04:57.000 17:08:26 -- common/autotest_common.sh@936 -- # '[' -z 60860 ']' 00:04:57.000 17:08:26 -- common/autotest_common.sh@940 -- # kill -0 60860 00:04:57.000 17:08:26 -- common/autotest_common.sh@941 -- # uname 00:04:57.000 17:08:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.000 17:08:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60860 00:04:57.000 17:08:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.000 17:08:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.000 17:08:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60860' 00:04:57.000 killing process with pid 60860 00:04:57.000 17:08:26 -- common/autotest_common.sh@955 -- # kill 60860 00:04:57.000 17:08:26 -- common/autotest_common.sh@960 -- # wait 60860 00:04:57.258 17:08:27 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60881 00:04:57.258 17:08:27 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.258 17:08:27 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.683 17:08:32 -- rpc/skip_rpc.sh@50 -- # killprocess 60881 00:05:02.683 17:08:32 -- common/autotest_common.sh@936 -- # '[' -z 60881 ']' 00:05:02.683 17:08:32 -- common/autotest_common.sh@940 -- # kill -0 60881 00:05:02.683 17:08:32 -- common/autotest_common.sh@941 -- # uname 00:05:02.683 17:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.683 17:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60881 00:05:02.683 17:08:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.683 17:08:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.683 killing process with pid 60881 00:05:02.683 17:08:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60881' 00:05:02.683 17:08:32 -- common/autotest_common.sh@955 -- # kill 60881 00:05:02.683 17:08:32 -- common/autotest_common.sh@960 -- # wait 60881 00:05:02.683 17:08:32 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.683 17:08:32 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 
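Note: the skip_rpc_with_json run above is a save/restore round trip: the first target instance builds its state over RPC, save_config serializes it to config.json, and a second instance started with --json (plus --no-rpc-server) must bring the TCP transport back up with no RPC traffic at all, which the grep for 'TCP Transport Init' confirms. A condensed sketch of the same sequence, assuming the first target is already up and the file paths are placeholders rather than the test's real ones:

    scripts/rpc.py nvmf_create_transport -t tcp                    # state created over RPC on the live target
    scripts/rpc.py save_config > /tmp/config.json                  # serialize every subsystem's config
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/tgt.log 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/tgt.log && echo 'transport restored from JSON'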
00:05:02.683 00:05:02.683 real 0m6.163s 00:05:02.683 user 0m5.918s 00:05:02.683 sys 0m0.390s 00:05:02.683 17:08:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.683 17:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.683 ************************************ 00:05:02.683 END TEST skip_rpc_with_json 00:05:02.683 ************************************ 00:05:02.683 17:08:32 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:02.683 17:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.683 17:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.683 17:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.683 ************************************ 00:05:02.683 START TEST skip_rpc_with_delay 00:05:02.683 ************************************ 00:05:02.683 17:08:32 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:02.683 17:08:32 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.683 17:08:32 -- common/autotest_common.sh@638 -- # local es=0 00:05:02.683 17:08:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.683 17:08:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.683 17:08:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:02.683 17:08:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.683 17:08:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:02.683 17:08:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.683 17:08:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:02.683 17:08:32 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.683 17:08:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:02.683 17:08:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.941 [2024-04-25 17:08:32.669769] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:02.941 [2024-04-25 17:08:32.669892] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:02.941 17:08:32 -- common/autotest_common.sh@641 -- # es=1 00:05:02.941 17:08:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:02.941 17:08:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:02.941 17:08:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:02.941 00:05:02.941 real 0m0.086s 00:05:02.941 user 0m0.059s 00:05:02.941 sys 0m0.025s 00:05:02.941 17:08:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.941 17:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.941 ************************************ 00:05:02.941 END TEST skip_rpc_with_delay 00:05:02.941 ************************************ 00:05:02.941 17:08:32 -- rpc/skip_rpc.sh@77 -- # uname 00:05:02.941 17:08:32 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:02.941 17:08:32 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:02.941 17:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.941 17:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.941 17:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.941 ************************************ 00:05:02.941 START TEST exit_on_failed_rpc_init 00:05:02.941 ************************************ 00:05:02.941 17:08:32 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:02.941 17:08:32 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61005 00:05:02.941 17:08:32 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.941 17:08:32 -- rpc/skip_rpc.sh@63 -- # waitforlisten 61005 00:05:02.942 17:08:32 -- common/autotest_common.sh@817 -- # '[' -z 61005 ']' 00:05:02.942 17:08:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.942 17:08:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.942 17:08:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.942 17:08:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.942 17:08:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.942 [2024-04-25 17:08:32.866279] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
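Note: the "Cannot use '--wait-for-rpc'" error a few lines up is the expected outcome of skip_rpc_with_delay, not a failure of the suite: --wait-for-rpc asks the app to pause until an RPC tells it to continue, which is meaningless when --no-rpc-server disables the RPC server, so spdk_tgt must refuse to start, and the NOT wrapper turns that non-zero exit into a pass. A rough equivalent without the test helpers (same binary path as in the log, otherwise illustrative):

    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'spdk_tgt accepted an invalid flag combination' >&2
        exit 1
    fi
    echo 'rejected as expected'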
00:05:02.942 [2024-04-25 17:08:32.866384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 00:05:03.200 [2024-04-25 17:08:32.998284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.200 [2024-04-25 17:08:33.049158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.458 17:08:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.458 17:08:33 -- common/autotest_common.sh@850 -- # return 0 00:05:03.458 17:08:33 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.458 17:08:33 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.458 17:08:33 -- common/autotest_common.sh@638 -- # local es=0 00:05:03.458 17:08:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.458 17:08:33 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.458 17:08:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:03.458 17:08:33 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.458 17:08:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:03.458 17:08:33 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.458 17:08:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:03.458 17:08:33 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.458 17:08:33 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:03.458 17:08:33 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.458 [2024-04-25 17:08:33.265825] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:03.458 [2024-04-25 17:08:33.266363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61016 ] 00:05:03.458 [2024-04-25 17:08:33.406569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.717 [2024-04-25 17:08:33.474469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.717 [2024-04-25 17:08:33.474591] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:03.717 [2024-04-25 17:08:33.474608] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:03.717 [2024-04-25 17:08:33.474619] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.717 17:08:33 -- common/autotest_common.sh@641 -- # es=234 00:05:03.717 17:08:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:03.717 17:08:33 -- common/autotest_common.sh@650 -- # es=106 00:05:03.717 17:08:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:03.717 17:08:33 -- common/autotest_common.sh@658 -- # es=1 00:05:03.717 17:08:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:03.717 17:08:33 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.717 17:08:33 -- rpc/skip_rpc.sh@70 -- # killprocess 61005 00:05:03.717 17:08:33 -- common/autotest_common.sh@936 -- # '[' -z 61005 ']' 00:05:03.717 17:08:33 -- common/autotest_common.sh@940 -- # kill -0 61005 00:05:03.717 17:08:33 -- common/autotest_common.sh@941 -- # uname 00:05:03.717 17:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.717 17:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61005 00:05:03.717 17:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.717 killing process with pid 61005 00:05:03.717 17:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.717 17:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61005' 00:05:03.717 17:08:33 -- common/autotest_common.sh@955 -- # kill 61005 00:05:03.717 17:08:33 -- common/autotest_common.sh@960 -- # wait 61005 00:05:03.976 00:05:03.976 real 0m1.065s 00:05:03.976 user 0m1.292s 00:05:03.976 sys 0m0.255s 00:05:03.976 17:08:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.976 17:08:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.976 ************************************ 00:05:03.976 END TEST exit_on_failed_rpc_init 00:05:03.976 ************************************ 00:05:03.976 17:08:33 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.976 00:05:03.976 real 0m13.139s 00:05:03.976 user 0m12.478s 00:05:03.976 sys 0m1.132s 00:05:03.976 17:08:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.976 17:08:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.976 ************************************ 00:05:03.976 END TEST skip_rpc 00:05:03.976 ************************************ 00:05:03.976 17:08:33 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.976 17:08:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.976 17:08:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.976 17:08:33 -- common/autotest_common.sh@10 -- # set +x 00:05:04.235 ************************************ 00:05:04.235 START TEST rpc_client 00:05:04.235 ************************************ 00:05:04.235 17:08:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:04.235 * Looking for test storage... 
00:05:04.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:04.235 17:08:34 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:04.235 OK 00:05:04.235 17:08:34 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:04.235 00:05:04.235 real 0m0.110s 00:05:04.235 user 0m0.047s 00:05:04.235 sys 0m0.068s 00:05:04.235 17:08:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.235 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.235 ************************************ 00:05:04.235 END TEST rpc_client 00:05:04.235 ************************************ 00:05:04.235 17:08:34 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.235 17:08:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.235 17:08:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.235 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.494 ************************************ 00:05:04.494 START TEST json_config 00:05:04.494 ************************************ 00:05:04.494 17:08:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.494 17:08:34 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.494 17:08:34 -- nvmf/common.sh@7 -- # uname -s 00:05:04.494 17:08:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.494 17:08:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.494 17:08:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.494 17:08:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.494 17:08:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.494 17:08:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.494 17:08:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.494 17:08:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.494 17:08:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.494 17:08:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.494 17:08:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:05:04.494 17:08:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:05:04.494 17:08:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.494 17:08:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.494 17:08:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.494 17:08:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.494 17:08:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.494 17:08:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.494 17:08:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.494 17:08:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.494 17:08:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.494 17:08:34 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.494 17:08:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.494 17:08:34 -- paths/export.sh@5 -- # export PATH 00:05:04.494 17:08:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.494 17:08:34 -- nvmf/common.sh@47 -- # : 0 00:05:04.494 17:08:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:04.494 17:08:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:04.494 17:08:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.494 17:08:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.494 17:08:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.494 17:08:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:04.494 17:08:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:04.494 17:08:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:04.494 17:08:34 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:04.494 17:08:34 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.494 17:08:34 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.494 17:08:34 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.494 17:08:34 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.494 17:08:34 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:04.494 17:08:34 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:04.494 17:08:34 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:04.494 17:08:34 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:04.494 17:08:34 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:04.494 17:08:34 -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:04.494 17:08:34 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:04.494 17:08:34 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:04.494 17:08:34 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:04.494 
17:08:34 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:04.494 INFO: JSON configuration test init 00:05:04.494 17:08:34 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:04.494 17:08:34 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:04.494 17:08:34 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:04.494 17:08:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:04.494 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.494 17:08:34 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:04.494 17:08:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:04.494 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.494 17:08:34 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:04.494 17:08:34 -- json_config/common.sh@9 -- # local app=target 00:05:04.494 17:08:34 -- json_config/common.sh@10 -- # shift 00:05:04.494 17:08:34 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.494 17:08:34 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.494 17:08:34 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.494 17:08:34 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.494 17:08:34 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.494 17:08:34 -- json_config/common.sh@22 -- # app_pid["$app"]=61144 00:05:04.494 Waiting for target to run... 00:05:04.494 17:08:34 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.494 17:08:34 -- json_config/common.sh@25 -- # waitforlisten 61144 /var/tmp/spdk_tgt.sock 00:05:04.494 17:08:34 -- common/autotest_common.sh@817 -- # '[' -z 61144 ']' 00:05:04.494 17:08:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.494 17:08:34 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:04.494 17:08:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:04.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.494 17:08:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.495 17:08:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:04.495 17:08:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.495 [2024-04-25 17:08:34.419897] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:04.495 [2024-04-25 17:08:34.419993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61144 ] 00:05:04.753 [2024-04-25 17:08:34.722891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.011 [2024-04-25 17:08:34.776063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.578 17:08:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.578 00:05:05.578 17:08:35 -- common/autotest_common.sh@850 -- # return 0 00:05:05.578 17:08:35 -- json_config/common.sh@26 -- # echo '' 00:05:05.578 17:08:35 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:05.578 17:08:35 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:05.578 17:08:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:05.578 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:05:05.578 17:08:35 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:05.578 17:08:35 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:05.578 17:08:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:05.578 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:05:05.578 17:08:35 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:05.578 17:08:35 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:05.578 17:08:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:06.147 17:08:35 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:06.147 17:08:35 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:06.147 17:08:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:06.147 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:05:06.147 17:08:35 -- json_config/json_config.sh@45 -- # local ret=0 00:05:06.147 17:08:35 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:06.147 17:08:35 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:06.147 17:08:35 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:06.147 17:08:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:06.147 17:08:35 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:06.147 17:08:36 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:06.147 17:08:36 -- json_config/json_config.sh@48 -- # local get_types 00:05:06.147 17:08:36 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:06.147 17:08:36 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:06.147 17:08:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.147 17:08:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.407 17:08:36 -- json_config/json_config.sh@55 -- # return 0 00:05:06.407 17:08:36 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:06.407 17:08:36 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:06.407 17:08:36 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:06.407 17:08:36 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
00:05:06.407 17:08:36 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:06.407 17:08:36 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:06.407 17:08:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:06.407 17:08:36 -- common/autotest_common.sh@10 -- # set +x 00:05:06.407 17:08:36 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:06.407 17:08:36 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:06.407 17:08:36 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:06.407 17:08:36 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.407 17:08:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:06.666 MallocForNvmf0 00:05:06.666 17:08:36 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.666 17:08:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:06.924 MallocForNvmf1 00:05:06.924 17:08:36 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.925 17:08:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:06.925 [2024-04-25 17:08:36.892172] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.183 17:08:36 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:07.183 17:08:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:07.183 17:08:37 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.183 17:08:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:07.441 17:08:37 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.441 17:08:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:07.701 17:08:37 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.701 17:08:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:07.959 [2024-04-25 17:08:37.808653] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.959 17:08:37 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:07.959 17:08:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:07.959 17:08:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.959 17:08:37 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:07.959 17:08:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:07.959 17:08:37 -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.959 17:08:37 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:07.959 17:08:37 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:07.959 17:08:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:08.219 MallocBdevForConfigChangeCheck 00:05:08.219 17:08:38 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:08.219 17:08:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:08.219 17:08:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.478 17:08:38 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:08.478 17:08:38 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.738 INFO: shutting down applications... 00:05:08.738 17:08:38 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:08.738 17:08:38 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:08.738 17:08:38 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:08.738 17:08:38 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:08.738 17:08:38 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:08.997 Calling clear_iscsi_subsystem 00:05:08.997 Calling clear_nvmf_subsystem 00:05:08.997 Calling clear_nbd_subsystem 00:05:08.997 Calling clear_ublk_subsystem 00:05:08.997 Calling clear_vhost_blk_subsystem 00:05:08.997 Calling clear_vhost_scsi_subsystem 00:05:08.997 Calling clear_bdev_subsystem 00:05:08.997 17:08:38 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:08.997 17:08:38 -- json_config/json_config.sh@343 -- # count=100 00:05:08.997 17:08:38 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:08.997 17:08:38 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.997 17:08:38 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.997 17:08:38 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:09.590 17:08:39 -- json_config/json_config.sh@345 -- # break 00:05:09.590 17:08:39 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:09.590 17:08:39 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:09.590 17:08:39 -- json_config/common.sh@31 -- # local app=target 00:05:09.590 17:08:39 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.590 17:08:39 -- json_config/common.sh@35 -- # [[ -n 61144 ]] 00:05:09.590 17:08:39 -- json_config/common.sh@38 -- # kill -SIGINT 61144 00:05:09.590 17:08:39 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.590 17:08:39 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.590 17:08:39 -- json_config/common.sh@41 -- # kill -0 61144 00:05:09.590 17:08:39 -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.158 17:08:39 -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.158 17:08:39 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.158 17:08:39 -- json_config/common.sh@41 -- # kill -0 61144 00:05:10.158 17:08:39 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.158 17:08:39 -- json_config/common.sh@43 -- # break 00:05:10.158 17:08:39 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.158 SPDK target shutdown done 00:05:10.158 17:08:39 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.158 INFO: relaunching applications... 00:05:10.158 17:08:39 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:10.158 17:08:39 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.158 17:08:39 -- json_config/common.sh@9 -- # local app=target 00:05:10.158 17:08:39 -- json_config/common.sh@10 -- # shift 00:05:10.158 17:08:39 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.158 17:08:39 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.158 17:08:39 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.158 17:08:39 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.158 17:08:39 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.158 17:08:39 -- json_config/common.sh@22 -- # app_pid["$app"]=61424 00:05:10.158 Waiting for target to run... 00:05:10.158 17:08:39 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.158 17:08:39 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.158 17:08:39 -- json_config/common.sh@25 -- # waitforlisten 61424 /var/tmp/spdk_tgt.sock 00:05:10.158 17:08:39 -- common/autotest_common.sh@817 -- # '[' -z 61424 ']' 00:05:10.158 17:08:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.158 17:08:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.158 17:08:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.158 17:08:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.158 17:08:39 -- common/autotest_common.sh@10 -- # set +x 00:05:10.158 [2024-04-25 17:08:39.949982] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:10.158 [2024-04-25 17:08:39.950110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61424 ] 00:05:10.417 [2024-04-25 17:08:40.225396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.417 [2024-04-25 17:08:40.262520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.677 [2024-04-25 17:08:40.550044] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.677 [2024-04-25 17:08:40.582122] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:10.936 17:08:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.936 17:08:40 -- common/autotest_common.sh@850 -- # return 0 00:05:10.936 00:05:10.936 17:08:40 -- json_config/common.sh@26 -- # echo '' 00:05:10.936 17:08:40 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:10.936 17:08:40 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:10.936 INFO: Checking if target configuration is the same... 00:05:10.936 17:08:40 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.936 17:08:40 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:10.937 17:08:40 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.937 + '[' 2 -ne 2 ']' 00:05:10.937 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:10.937 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:10.937 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:10.937 +++ basename /dev/fd/62 00:05:11.226 ++ mktemp /tmp/62.XXX 00:05:11.226 + tmp_file_1=/tmp/62.LuI 00:05:11.226 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.226 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.226 + tmp_file_2=/tmp/spdk_tgt_config.json.6WU 00:05:11.226 + ret=0 00:05:11.226 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.485 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:11.485 + diff -u /tmp/62.LuI /tmp/spdk_tgt_config.json.6WU 00:05:11.485 INFO: JSON config files are the same 00:05:11.485 + echo 'INFO: JSON config files are the same' 00:05:11.485 + rm /tmp/62.LuI /tmp/spdk_tgt_config.json.6WU 00:05:11.485 + exit 0 00:05:11.485 17:08:41 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:11.485 INFO: changing configuration and checking if this can be detected... 00:05:11.485 17:08:41 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:11.485 17:08:41 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.485 17:08:41 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:11.744 17:08:41 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.745 17:08:41 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:11.745 17:08:41 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.745 + '[' 2 -ne 2 ']' 00:05:11.745 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:11.745 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:11.745 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:11.745 +++ basename /dev/fd/62 00:05:11.745 ++ mktemp /tmp/62.XXX 00:05:11.745 + tmp_file_1=/tmp/62.slL 00:05:11.745 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:11.745 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:11.745 + tmp_file_2=/tmp/spdk_tgt_config.json.Nex 00:05:11.745 + ret=0 00:05:11.745 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:12.312 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:12.312 + diff -u /tmp/62.slL /tmp/spdk_tgt_config.json.Nex 00:05:12.312 + ret=1 00:05:12.312 + echo '=== Start of file: /tmp/62.slL ===' 00:05:12.312 + cat /tmp/62.slL 00:05:12.312 + echo '=== End of file: /tmp/62.slL ===' 00:05:12.312 + echo '' 00:05:12.312 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Nex ===' 00:05:12.312 + cat /tmp/spdk_tgt_config.json.Nex 00:05:12.312 + echo '=== End of file: /tmp/spdk_tgt_config.json.Nex ===' 00:05:12.312 + echo '' 00:05:12.312 + rm /tmp/62.slL /tmp/spdk_tgt_config.json.Nex 00:05:12.312 + exit 1 00:05:12.312 INFO: configuration change detected. 00:05:12.312 17:08:42 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:12.312 17:08:42 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:12.312 17:08:42 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:12.312 17:08:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:12.312 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.312 17:08:42 -- json_config/json_config.sh@307 -- # local ret=0 00:05:12.312 17:08:42 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:12.312 17:08:42 -- json_config/json_config.sh@317 -- # [[ -n 61424 ]] 00:05:12.312 17:08:42 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:12.312 17:08:42 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:12.312 17:08:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:12.312 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.312 17:08:42 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:12.312 17:08:42 -- json_config/json_config.sh@193 -- # uname -s 00:05:12.312 17:08:42 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:12.312 17:08:42 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:12.312 17:08:42 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:12.312 17:08:42 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:12.312 17:08:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:12.312 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.312 17:08:42 -- json_config/json_config.sh@323 -- # killprocess 61424 00:05:12.312 17:08:42 -- common/autotest_common.sh@936 -- # '[' -z 61424 ']' 00:05:12.312 17:08:42 -- common/autotest_common.sh@940 -- # kill -0 61424 00:05:12.312 17:08:42 -- common/autotest_common.sh@941 -- # uname 00:05:12.312 17:08:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:12.312 17:08:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61424 00:05:12.312 17:08:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:12.312 17:08:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:12.312 killing process with pid 61424 00:05:12.312 17:08:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61424' 00:05:12.312 
17:08:42 -- common/autotest_common.sh@955 -- # kill 61424 00:05:12.312 17:08:42 -- common/autotest_common.sh@960 -- # wait 61424 00:05:12.572 17:08:42 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:12.572 17:08:42 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:12.572 17:08:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:12.572 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.572 17:08:42 -- json_config/json_config.sh@328 -- # return 0 00:05:12.572 INFO: Success 00:05:12.572 17:08:42 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:12.572 00:05:12.572 real 0m8.119s 00:05:12.572 user 0m11.792s 00:05:12.572 sys 0m1.517s 00:05:12.572 17:08:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.572 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.572 ************************************ 00:05:12.572 END TEST json_config 00:05:12.572 ************************************ 00:05:12.572 17:08:42 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.572 17:08:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.572 17:08:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.572 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.572 ************************************ 00:05:12.572 START TEST json_config_extra_key 00:05:12.572 ************************************ 00:05:12.572 17:08:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.831 17:08:42 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.831 17:08:42 -- nvmf/common.sh@7 -- # uname -s 00:05:12.831 17:08:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.831 17:08:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.831 17:08:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.831 17:08:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.831 17:08:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.831 17:08:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.831 17:08:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.831 17:08:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.831 17:08:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.831 17:08:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.831 17:08:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:05:12.831 17:08:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:05:12.831 17:08:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.831 17:08:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.831 17:08:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.831 17:08:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.831 17:08:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.831 17:08:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.831 17:08:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.831 17:08:42 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.831 17:08:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.831 17:08:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.831 17:08:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.831 17:08:42 -- paths/export.sh@5 -- # export PATH 00:05:12.831 17:08:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.832 17:08:42 -- nvmf/common.sh@47 -- # : 0 00:05:12.832 17:08:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:12.832 17:08:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:12.832 17:08:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.832 17:08:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.832 17:08:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.832 17:08:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:12.832 17:08:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:12.832 17:08:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.832 INFO: launching applications... 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:12.832 17:08:42 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.832 17:08:42 -- json_config/common.sh@9 -- # local app=target 00:05:12.832 17:08:42 -- json_config/common.sh@10 -- # shift 00:05:12.832 17:08:42 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.832 17:08:42 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.832 17:08:42 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.832 17:08:42 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.832 17:08:42 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.832 17:08:42 -- json_config/common.sh@22 -- # app_pid["$app"]=61594 00:05:12.832 Waiting for target to run... 00:05:12.832 17:08:42 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.832 17:08:42 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.832 17:08:42 -- json_config/common.sh@25 -- # waitforlisten 61594 /var/tmp/spdk_tgt.sock 00:05:12.832 17:08:42 -- common/autotest_common.sh@817 -- # '[' -z 61594 ']' 00:05:12.832 17:08:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.832 17:08:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.832 17:08:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.832 17:08:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.832 17:08:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.832 [2024-04-25 17:08:42.643658] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:12.832 [2024-04-25 17:08:42.643813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61594 ] 00:05:13.091 [2024-04-25 17:08:42.950116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.091 [2024-04-25 17:08:42.988989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.657 17:08:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.657 17:08:43 -- common/autotest_common.sh@850 -- # return 0 00:05:13.657 00:05:13.657 17:08:43 -- json_config/common.sh@26 -- # echo '' 00:05:13.657 INFO: shutting down applications... 00:05:13.657 17:08:43 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:13.657 17:08:43 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:13.657 17:08:43 -- json_config/common.sh@31 -- # local app=target 00:05:13.657 17:08:43 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.657 17:08:43 -- json_config/common.sh@35 -- # [[ -n 61594 ]] 00:05:13.657 17:08:43 -- json_config/common.sh@38 -- # kill -SIGINT 61594 00:05:13.657 17:08:43 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.657 17:08:43 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.657 17:08:43 -- json_config/common.sh@41 -- # kill -0 61594 00:05:13.657 17:08:43 -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.223 17:08:44 -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.223 17:08:44 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.223 17:08:44 -- json_config/common.sh@41 -- # kill -0 61594 00:05:14.223 17:08:44 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.223 17:08:44 -- json_config/common.sh@43 -- # break 00:05:14.223 17:08:44 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.223 SPDK target shutdown done 00:05:14.223 17:08:44 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.223 Success 00:05:14.223 17:08:44 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:14.223 00:05:14.223 real 0m1.584s 00:05:14.223 user 0m1.418s 00:05:14.223 sys 0m0.305s 00:05:14.223 17:08:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.223 17:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.223 ************************************ 00:05:14.223 END TEST json_config_extra_key 00:05:14.223 ************************************ 00:05:14.223 17:08:44 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.223 17:08:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.223 17:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.223 17:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.223 ************************************ 00:05:14.223 START TEST alias_rpc 00:05:14.223 ************************************ 00:05:14.223 17:08:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:14.482 * Looking for test storage... 00:05:14.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:14.482 17:08:44 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:14.482 17:08:44 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61681 00:05:14.482 17:08:44 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61681 00:05:14.482 17:08:44 -- common/autotest_common.sh@817 -- # '[' -z 61681 ']' 00:05:14.482 17:08:44 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.482 17:08:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.482 17:08:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.482 17:08:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.482 17:08:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.482 17:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.483 [2024-04-25 17:08:44.326948] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:14.483 [2024-04-25 17:08:44.327058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61681 ] 00:05:14.483 [2024-04-25 17:08:44.457791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.741 [2024-04-25 17:08:44.512276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.307 17:08:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.307 17:08:45 -- common/autotest_common.sh@850 -- # return 0 00:05:15.307 17:08:45 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:15.565 17:08:45 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61681 00:05:15.565 17:08:45 -- common/autotest_common.sh@936 -- # '[' -z 61681 ']' 00:05:15.565 17:08:45 -- common/autotest_common.sh@940 -- # kill -0 61681 00:05:15.565 17:08:45 -- common/autotest_common.sh@941 -- # uname 00:05:15.565 17:08:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.565 17:08:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61681 00:05:15.565 17:08:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.565 17:08:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.565 killing process with pid 61681 00:05:15.565 17:08:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61681' 00:05:15.565 17:08:45 -- common/autotest_common.sh@955 -- # kill 61681 00:05:15.565 17:08:45 -- common/autotest_common.sh@960 -- # wait 61681 00:05:16.133 00:05:16.133 real 0m1.622s 00:05:16.133 user 0m1.942s 00:05:16.133 sys 0m0.314s 00:05:16.133 17:08:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.133 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.133 ************************************ 00:05:16.133 END TEST alias_rpc 00:05:16.133 ************************************ 00:05:16.133 17:08:45 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:05:16.133 17:08:45 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.133 17:08:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.133 17:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.133 17:08:45 -- common/autotest_common.sh@10 -- # set +x 00:05:16.133 ************************************ 00:05:16.133 START TEST dpdk_mem_utility 00:05:16.133 ************************************ 00:05:16.133 17:08:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.133 * Looking for test storage... 
00:05:16.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:16.133 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.133 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61777 00:05:16.133 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.133 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61777 00:05:16.133 17:08:46 -- common/autotest_common.sh@817 -- # '[' -z 61777 ']' 00:05:16.133 17:08:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.133 17:08:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.133 17:08:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.133 17:08:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.133 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.133 [2024-04-25 17:08:46.062054] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:16.133 [2024-04-25 17:08:46.062144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61777 ] 00:05:16.392 [2024-04-25 17:08:46.193777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.392 [2024-04-25 17:08:46.243016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.652 17:08:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.652 17:08:46 -- common/autotest_common.sh@850 -- # return 0 00:05:16.652 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.652 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.652 17:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.652 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.652 { 00:05:16.652 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.652 } 00:05:16.652 17:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.652 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.652 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:16.652 1 heaps totaling size 814.000000 MiB 00:05:16.652 size: 814.000000 MiB heap id: 0 00:05:16.652 end heaps---------- 00:05:16.652 8 mempools totaling size 598.116089 MiB 00:05:16.652 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.652 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.652 size: 84.521057 MiB name: bdev_io_61777 00:05:16.652 size: 51.011292 MiB name: evtpool_61777 00:05:16.652 size: 50.003479 MiB name: msgpool_61777 00:05:16.652 size: 21.763794 MiB name: PDU_Pool 00:05:16.652 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.652 size: 0.026123 MiB name: Session_Pool 00:05:16.652 end mempools------- 00:05:16.652 6 memzones totaling size 4.142822 MiB 00:05:16.652 size: 1.000366 MiB name: RG_ring_0_61777 00:05:16.652 size: 1.000366 MiB name: RG_ring_1_61777 00:05:16.652 size: 1.000366 MiB name: RG_ring_4_61777 00:05:16.652 size: 1.000366 MiB name: 
RG_ring_5_61777 00:05:16.652 size: 0.125366 MiB name: RG_ring_2_61777 00:05:16.652 size: 0.015991 MiB name: RG_ring_3_61777 00:05:16.652 end memzones------- 00:05:16.652 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.652 heap id: 0 total size: 814.000000 MiB number of busy elements: 214 number of free elements: 15 00:05:16.652 list of free elements. size: 12.487671 MiB 00:05:16.652 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:16.652 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:16.652 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:16.652 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:16.652 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:16.652 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:16.652 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:16.652 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:16.652 element at address: 0x200000200000 with size: 0.837036 MiB 00:05:16.652 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:05:16.652 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:16.652 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:16.652 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:16.652 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:16.652 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:16.652 list of standard malloc elements. size: 199.249756 MiB 00:05:16.652 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:16.652 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:16.652 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:16.652 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:16.652 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:16.652 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:16.652 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:16.652 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:16.652 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:16.652 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7280 with size: 0.000183 MiB 
00:05:16.652 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:16.652 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:16.652 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:16.652 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:16.652 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:16.652 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:16.652 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:16.653 element at 
address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94600 
with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:16.653 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 
00:05:16.653 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:16.653 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:16.654 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:16.654 list of memzone associated elements. 
size: 602.262573 MiB 00:05:16.654 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:16.654 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.654 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:16.654 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.654 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:16.654 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61777_0 00:05:16.654 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:16.654 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61777_0 00:05:16.654 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:16.654 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61777_0 00:05:16.654 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:16.654 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.654 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:16.654 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.654 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:16.654 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61777 00:05:16.654 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:16.654 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61777 00:05:16.654 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:16.654 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61777 00:05:16.654 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:16.654 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.654 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:16.654 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.654 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:16.654 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.654 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:16.654 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.654 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:16.654 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61777 00:05:16.654 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:16.654 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61777 00:05:16.654 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:16.654 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61777 00:05:16.654 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:16.654 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61777 00:05:16.654 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:16.654 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61777 00:05:16.654 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:16.654 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.654 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:16.654 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.654 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:16.654 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.654 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:16.654 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61777 00:05:16.654 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:16.654 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.654 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:16.654 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.654 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:16.654 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61777 00:05:16.654 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:16.654 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.654 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:16.654 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61777 00:05:16.654 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:16.654 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61777 00:05:16.654 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:16.654 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.654 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.654 17:08:46 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61777 00:05:16.654 17:08:46 -- common/autotest_common.sh@936 -- # '[' -z 61777 ']' 00:05:16.654 17:08:46 -- common/autotest_common.sh@940 -- # kill -0 61777 00:05:16.654 17:08:46 -- common/autotest_common.sh@941 -- # uname 00:05:16.654 17:08:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.654 17:08:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61777 00:05:16.654 17:08:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.654 17:08:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.654 killing process with pid 61777 00:05:16.654 17:08:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61777' 00:05:16.654 17:08:46 -- common/autotest_common.sh@955 -- # kill 61777 00:05:16.654 17:08:46 -- common/autotest_common.sh@960 -- # wait 61777 00:05:16.913 00:05:16.913 real 0m0.902s 00:05:16.913 user 0m0.984s 00:05:16.913 sys 0m0.243s 00:05:16.913 17:08:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.913 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.913 ************************************ 00:05:16.913 END TEST dpdk_mem_utility 00:05:16.913 ************************************ 00:05:16.913 17:08:46 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:16.913 17:08:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.913 17:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.913 17:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:17.171 ************************************ 00:05:17.171 START TEST event 00:05:17.171 ************************************ 00:05:17.171 17:08:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.171 * Looking for test storage... 
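Note: the event suite that starts here is driven by the same run_test wrapper that framed the dpdk_mem_utility test above; it prints the START/END banners and the real/user/sys timing lines that bracket every sub-test in this log. A minimal sketch of that pattern follows, assuming a simplified form of the helper in common/autotest_common.sh (the real function also manages xtrace state and argument checks):

# Hypothetical simplification of the run_test wrapper whose banners appear throughout this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    time "$@"                     # produces the real/user/sys lines seen after each sub-test
    echo "END TEST $name"
    echo "************************************"
}
# e.g. the next sub-test in event.sh is invoked as:
# run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1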
00:05:17.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:17.171 17:08:47 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:17.171 17:08:47 -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.171 17:08:47 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.171 17:08:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:17.171 17:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.171 17:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.171 ************************************ 00:05:17.171 START TEST event_perf 00:05:17.171 ************************************ 00:05:17.171 17:08:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.171 Running I/O for 1 seconds...[2024-04-25 17:08:47.113225] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:17.171 [2024-04-25 17:08:47.113324] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61864 ] 00:05:17.430 [2024-04-25 17:08:47.250142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.430 [2024-04-25 17:08:47.302260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.430 [2024-04-25 17:08:47.302389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.430 [2024-04-25 17:08:47.302519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.430 [2024-04-25 17:08:47.302535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.398 Running I/O for 1 seconds... 00:05:18.398 lcore 0: 200948 00:05:18.398 lcore 1: 200947 00:05:18.398 lcore 2: 200946 00:05:18.398 lcore 3: 200946 00:05:18.656 done. 00:05:18.656 00:05:18.656 real 0m1.285s 00:05:18.656 user 0m4.122s 00:05:18.656 sys 0m0.046s 00:05:18.656 17:08:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.656 17:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.656 ************************************ 00:05:18.656 END TEST event_perf 00:05:18.656 ************************************ 00:05:18.656 17:08:48 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.656 17:08:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:18.656 17:08:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.656 17:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.656 ************************************ 00:05:18.656 START TEST event_reactor 00:05:18.656 ************************************ 00:05:18.656 17:08:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:18.656 [2024-04-25 17:08:48.509813] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
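Note: in the event_perf run above, -m 0xF brings up reactors on lcores 0-3 and -t 1 runs the measurement for one second; the four "lcore N:" counters are the events each reactor processed in that window (roughly 200k per core here). The measurement can be repeated with the binary exactly as traced:

# Re-running the event_perf measurement as traced above.
# -m 0xF = lcore mask for cores 0-3, -t 1 = run time in seconds.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1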
00:05:18.656 [2024-04-25 17:08:48.509901] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61902 ] 00:05:18.915 [2024-04-25 17:08:48.645649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.915 [2024-04-25 17:08:48.694891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.849 test_start 00:05:19.849 oneshot 00:05:19.849 tick 100 00:05:19.849 tick 100 00:05:19.849 tick 250 00:05:19.849 tick 100 00:05:19.849 tick 100 00:05:19.849 tick 250 00:05:19.849 tick 500 00:05:19.849 tick 100 00:05:19.849 tick 100 00:05:19.849 tick 100 00:05:19.849 tick 250 00:05:19.849 tick 100 00:05:19.849 tick 100 00:05:19.849 test_end 00:05:19.849 00:05:19.849 real 0m1.281s 00:05:19.849 user 0m1.142s 00:05:19.849 sys 0m0.033s 00:05:19.849 17:08:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.849 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.849 ************************************ 00:05:19.849 END TEST event_reactor 00:05:19.849 ************************************ 00:05:19.849 17:08:49 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.849 17:08:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:19.849 17:08:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.849 17:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:20.108 ************************************ 00:05:20.108 START TEST event_reactor_perf 00:05:20.108 ************************************ 00:05:20.108 17:08:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.108 [2024-04-25 17:08:49.900168] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:20.108 [2024-04-25 17:08:49.900249] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61936 ] 00:05:20.108 [2024-04-25 17:08:50.036611] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.367 [2024-04-25 17:08:50.093359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.302 test_start 00:05:21.302 test_end 00:05:21.302 Performance: 443733 events per second 00:05:21.302 00:05:21.302 real 0m1.305s 00:05:21.302 user 0m1.158s 00:05:21.302 sys 0m0.042s 00:05:21.302 17:08:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.302 17:08:51 -- common/autotest_common.sh@10 -- # set +x 00:05:21.302 ************************************ 00:05:21.302 END TEST event_reactor_perf 00:05:21.302 ************************************ 00:05:21.302 17:08:51 -- event/event.sh@49 -- # uname -s 00:05:21.302 17:08:51 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:21.302 17:08:51 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.302 17:08:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.302 17:08:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.302 17:08:51 -- common/autotest_common.sh@10 -- # set +x 00:05:21.561 ************************************ 00:05:21.561 START TEST event_scheduler 00:05:21.561 ************************************ 00:05:21.561 17:08:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.561 * Looking for test storage... 00:05:21.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:21.561 17:08:51 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:21.561 17:08:51 -- scheduler/scheduler.sh@35 -- # scheduler_pid=62009 00:05:21.561 17:08:51 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:21.561 17:08:51 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.561 17:08:51 -- scheduler/scheduler.sh@37 -- # waitforlisten 62009 00:05:21.561 17:08:51 -- common/autotest_common.sh@817 -- # '[' -z 62009 ']' 00:05:21.561 17:08:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.561 17:08:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.561 17:08:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.561 17:08:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.561 17:08:51 -- common/autotest_common.sh@10 -- # set +x 00:05:21.561 [2024-04-25 17:08:51.448567] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
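Note: the scheduler test app above is launched with --wait-for-rpc, so the harness blocks in waitforlisten until the target answers on its RPC socket before configuring anything. A rough sketch of that step, assuming a simplified form of the helper in common/autotest_common.sh (the real helper handles retries and timeouts differently):

# Assumed simplification of waitforlisten: poll the RPC socket while the pid is alive.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while kill -0 "$pid" 2>/dev/null; do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0              # target is up and serving RPCs
        fi
        sleep 0.1
    done
    return 1                      # process exited before it started listening
}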
00:05:21.561 [2024-04-25 17:08:51.448691] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62009 ] 00:05:21.821 [2024-04-25 17:08:51.589075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.821 [2024-04-25 17:08:51.658774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.821 [2024-04-25 17:08:51.658930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.821 [2024-04-25 17:08:51.659017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.821 [2024-04-25 17:08:51.659274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.755 17:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.755 17:08:52 -- common/autotest_common.sh@850 -- # return 0 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 POWER: Env isn't set yet! 00:05:22.755 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:22.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.755 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.755 POWER: Attempting to initialise PSTAT power management... 00:05:22.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.755 POWER: Cannot set governor of lcore 0 to performance 00:05:22.755 POWER: Attempting to initialise AMD PSTATE power management... 00:05:22.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.755 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.755 POWER: Attempting to initialise CPPC power management... 00:05:22.755 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.755 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.755 POWER: Attempting to initialise VM power management... 00:05:22.755 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:22.755 POWER: Unable to set Power Management Environment for lcore 0 00:05:22.755 [2024-04-25 17:08:52.400310] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:22.755 [2024-04-25 17:08:52.400325] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:22.755 [2024-04-25 17:08:52.400334] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 [2024-04-25 17:08:52.451291] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
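Note: the POWER/GUEST_CHANNEL errors above are expected on this VM; no cpufreq interface or virtio power agent is exposed to the guest, so the dynamic scheduler comes up without a DPDK governor and the test continues. Because the app was started with --wait-for-rpc, only the two framework RPCs visible in the trace are needed to get it running (shown here against the default /var/tmp/spdk.sock socket):

# The two framework RPCs issued above before the scheduler sub-tests can proceed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc framework_set_scheduler dynamic   # pick the dynamic scheduler while init is paused
$rpc framework_start_init              # finish subsystem init, allowed by --wait-for-rpc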
00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.755 17:08:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.755 17:08:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 ************************************ 00:05:22.755 START TEST scheduler_create_thread 00:05:22.755 ************************************ 00:05:22.755 17:08:52 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 2 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 3 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 4 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 5 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 6 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 7 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 8 00:05:22.755 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.755 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.755 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.755 9 00:05:22.755 
17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.755 17:08:52 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.756 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.756 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 10 00:05:22.756 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.756 17:08:52 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.756 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.756 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.756 17:08:52 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.756 17:08:52 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.756 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.756 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.756 17:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.756 17:08:52 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.756 17:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:22.756 17:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:24.131 17:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.131 17:08:54 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.131 17:08:54 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.131 17:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.131 17:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:25.508 ************************************ 00:05:25.508 END TEST scheduler_create_thread 00:05:25.508 ************************************ 00:05:25.508 17:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:25.508 00:05:25.508 real 0m2.614s 00:05:25.508 user 0m0.018s 00:05:25.508 sys 0m0.007s 00:05:25.508 17:08:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.508 17:08:55 -- common/autotest_common.sh@10 -- # set +x 00:05:25.508 17:08:55 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.508 17:08:55 -- scheduler/scheduler.sh@46 -- # killprocess 62009 00:05:25.508 17:08:55 -- common/autotest_common.sh@936 -- # '[' -z 62009 ']' 00:05:25.508 17:08:55 -- common/autotest_common.sh@940 -- # kill -0 62009 00:05:25.508 17:08:55 -- common/autotest_common.sh@941 -- # uname 00:05:25.508 17:08:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.508 17:08:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62009 00:05:25.508 17:08:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:25.508 17:08:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:25.508 17:08:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62009' 00:05:25.508 killing process with pid 62009 00:05:25.508 17:08:55 -- common/autotest_common.sh@955 -- # kill 62009 00:05:25.508 17:08:55 -- common/autotest_common.sh@960 -- # wait 62009 00:05:25.767 [2024-04-25 17:08:55.614949] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
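Note: the scheduler_create_thread sub-test above drives everything through rpc.py with the test-only scheduler_plugin module shipped next to the scheduler test app. A condensed replay of the calls traced above; the $rpc shorthand, the loop, and capturing the returned thread id into a variable are assumptions about the wrapper, not literal script lines (the plugin module must be importable, e.g. via PYTHONPATH):

# Condensed replay of the scheduler_thread_* RPCs traced above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m $mask -a 100   # busy thread pinned per core
done
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n idle_pinned -m $mask -a 0       # idle thread pinned per core
done
$rpc scheduler_thread_create -n one_third_active -a 30
tid=$($rpc scheduler_thread_create -n half_active -a 0)             # returned id (11 in the trace)
$rpc scheduler_thread_set_active "$tid" 50
tid=$($rpc scheduler_thread_create -n deleted -a 100)               # returned id (12 in the trace)
$rpc scheduler_thread_delete "$tid"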
00:05:26.026 00:05:26.026 real 0m4.496s 00:05:26.026 user 0m8.604s 00:05:26.026 sys 0m0.343s 00:05:26.026 17:08:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.026 17:08:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.026 ************************************ 00:05:26.026 END TEST event_scheduler 00:05:26.026 ************************************ 00:05:26.026 17:08:55 -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.026 17:08:55 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.026 17:08:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.026 17:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.026 17:08:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.026 ************************************ 00:05:26.026 START TEST app_repeat 00:05:26.026 ************************************ 00:05:26.026 17:08:55 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:26.026 17:08:55 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.026 17:08:55 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.026 17:08:55 -- event/event.sh@13 -- # local nbd_list 00:05:26.026 17:08:55 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.026 17:08:55 -- event/event.sh@14 -- # local bdev_list 00:05:26.026 17:08:55 -- event/event.sh@15 -- # local repeat_times=4 00:05:26.026 17:08:55 -- event/event.sh@17 -- # modprobe nbd 00:05:26.026 17:08:55 -- event/event.sh@19 -- # repeat_pid=62129 00:05:26.026 17:08:55 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.026 17:08:55 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.026 Process app_repeat pid: 62129 00:05:26.026 17:08:55 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62129' 00:05:26.026 17:08:55 -- event/event.sh@23 -- # for i in {0..2} 00:05:26.026 17:08:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.026 spdk_app_start Round 0 00:05:26.026 17:08:55 -- event/event.sh@25 -- # waitforlisten 62129 /var/tmp/spdk-nbd.sock 00:05:26.026 17:08:55 -- common/autotest_common.sh@817 -- # '[' -z 62129 ']' 00:05:26.026 17:08:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.026 17:08:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.026 17:08:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.026 17:08:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.026 17:08:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.026 [2024-04-25 17:08:55.956140] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
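Note: app_repeat exercises the same bdev/NBD data path three times in a row, restarting the SPDK app between rounds. The outer loop visible in the trace (event.sh@23-35) boils down to the following shape; $rpc and $repeat_pid stand in for the full rpc.py invocation and the pid echoed above and are not literal variables from the script:

# Assumed shape of the app_repeat outer loop traced below.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # wait for the (re)started app
    # create Malloc0/Malloc1 and run the NBD data-verify pass
    # (see the condensed sketch after Round 0 below), then trigger the next round:
    $rpc spdk_kill_instance SIGTERM
    sleep 3
done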
00:05:26.026 [2024-04-25 17:08:55.956226] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62129 ] 00:05:26.285 [2024-04-25 17:08:56.088237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.285 [2024-04-25 17:08:56.138454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.285 [2024-04-25 17:08:56.138460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.285 17:08:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.285 17:08:56 -- common/autotest_common.sh@850 -- # return 0 00:05:26.285 17:08:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.546 Malloc0 00:05:26.546 17:08:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.805 Malloc1 00:05:26.805 17:08:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@12 -- # local i 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.805 17:08:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.064 /dev/nbd0 00:05:27.064 17:08:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.064 17:08:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.064 17:08:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:27.064 17:08:56 -- common/autotest_common.sh@855 -- # local i 00:05:27.064 17:08:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:27.064 17:08:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:27.064 17:08:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:27.064 17:08:56 -- common/autotest_common.sh@859 -- # break 00:05:27.064 17:08:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:27.064 17:08:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:27.064 17:08:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.064 1+0 records in 00:05:27.064 1+0 records out 00:05:27.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372942 s, 11.0 MB/s 00:05:27.064 17:08:56 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.064 17:08:56 -- common/autotest_common.sh@872 -- # size=4096 00:05:27.064 17:08:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.064 17:08:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:27.064 17:08:56 -- common/autotest_common.sh@875 -- # return 0 00:05:27.064 17:08:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.064 17:08:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.064 17:08:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.322 /dev/nbd1 00:05:27.322 17:08:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.322 17:08:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.322 17:08:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:27.322 17:08:57 -- common/autotest_common.sh@855 -- # local i 00:05:27.322 17:08:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:27.322 17:08:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:27.322 17:08:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:27.322 17:08:57 -- common/autotest_common.sh@859 -- # break 00:05:27.322 17:08:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:27.322 17:08:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:27.322 17:08:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.322 1+0 records in 00:05:27.322 1+0 records out 00:05:27.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293418 s, 14.0 MB/s 00:05:27.322 17:08:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.322 17:08:57 -- common/autotest_common.sh@872 -- # size=4096 00:05:27.322 17:08:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.580 17:08:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:27.580 17:08:57 -- common/autotest_common.sh@875 -- # return 0 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.580 { 00:05:27.580 "bdev_name": "Malloc0", 00:05:27.580 "nbd_device": "/dev/nbd0" 00:05:27.580 }, 00:05:27.580 { 00:05:27.580 "bdev_name": "Malloc1", 00:05:27.580 "nbd_device": "/dev/nbd1" 00:05:27.580 } 00:05:27.580 ]' 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.580 { 00:05:27.580 "bdev_name": "Malloc0", 00:05:27.580 "nbd_device": "/dev/nbd0" 00:05:27.580 }, 00:05:27.580 { 00:05:27.580 "bdev_name": "Malloc1", 00:05:27.580 "nbd_device": "/dev/nbd1" 00:05:27.580 } 00:05:27.580 ]' 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.580 /dev/nbd1' 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:05:27.580 /dev/nbd1' 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.580 17:08:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.839 256+0 records in 00:05:27.839 256+0 records out 00:05:27.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0086725 s, 121 MB/s 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.839 256+0 records in 00:05:27.839 256+0 records out 00:05:27.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024226 s, 43.3 MB/s 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.839 256+0 records in 00:05:27.839 256+0 records out 00:05:27.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02638 s, 39.7 MB/s 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@51 -- # local i 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.839 17:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd0 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@41 -- # break 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.098 17:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@41 -- # break 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.356 17:08:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@65 -- # true 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.615 17:08:58 -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.615 17:08:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.874 17:08:58 -- event/event.sh@35 -- # sleep 3 00:05:28.874 [2024-04-25 17:08:58.814494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.133 [2024-04-25 17:08:58.861822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.133 [2024-04-25 17:08:58.861832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.133 [2024-04-25 17:08:58.889869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.133 [2024-04-25 17:08:58.889937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.419 spdk_app_start Round 1 00:05:32.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
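Note: each round's data check is the nbd_rpc_data_verify flow that just completed above: export both malloc bdevs as NBD block devices, write the same 1 MiB of random data to each through /dev/nbdX, compare it back with cmp, then tear the NBD devices down. A condensed, stand-alone version of that pass, with paths and sizes as they appear in the trace (error handling and the waitfornbd polling omitted):

# Condensed replay of one NBD data-verify pass from the trace above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
testdir=/home/vagrant/spdk_repo/spdk/test/event

$rpc bdev_malloc_create 64 4096        # -> Malloc0: 64 MiB bdev with 4 KiB blocks
$rpc bdev_malloc_create 64 4096        # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256          # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$testdir/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct # write it through NBD
    cmp -b -n 1M $testdir/nbdrandtest $nbd                            # verify the round trip
done
rm $testdir/nbdrandtest

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1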
00:05:32.419 17:09:01 -- event/event.sh@23 -- # for i in {0..2} 00:05:32.419 17:09:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.419 17:09:01 -- event/event.sh@25 -- # waitforlisten 62129 /var/tmp/spdk-nbd.sock 00:05:32.419 17:09:01 -- common/autotest_common.sh@817 -- # '[' -z 62129 ']' 00:05:32.419 17:09:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.419 17:09:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.419 17:09:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.419 17:09:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.419 17:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.419 17:09:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.419 17:09:01 -- common/autotest_common.sh@850 -- # return 0 00:05:32.419 17:09:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.419 Malloc0 00:05:32.419 17:09:02 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.677 Malloc1 00:05:32.677 17:09:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.677 17:09:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.677 17:09:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.677 17:09:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.677 17:09:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.677 17:09:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.677 17:09:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@12 -- # local i 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.678 17:09:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.936 /dev/nbd0 00:05:32.936 17:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.936 17:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.936 17:09:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:32.936 17:09:02 -- common/autotest_common.sh@855 -- # local i 00:05:32.936 17:09:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:32.936 17:09:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:32.936 17:09:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:32.936 17:09:02 -- common/autotest_common.sh@859 -- # break 00:05:32.936 17:09:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:32.936 17:09:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:32.936 17:09:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:32.936 1+0 records in 00:05:32.936 1+0 records out 00:05:32.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185178 s, 22.1 MB/s 00:05:32.936 17:09:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.936 17:09:02 -- common/autotest_common.sh@872 -- # size=4096 00:05:32.936 17:09:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.936 17:09:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:32.936 17:09:02 -- common/autotest_common.sh@875 -- # return 0 00:05:32.936 17:09:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.936 17:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.936 17:09:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.195 /dev/nbd1 00:05:33.195 17:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.195 17:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.195 17:09:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:33.195 17:09:02 -- common/autotest_common.sh@855 -- # local i 00:05:33.195 17:09:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:33.195 17:09:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:33.195 17:09:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:33.195 17:09:02 -- common/autotest_common.sh@859 -- # break 00:05:33.195 17:09:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:33.195 17:09:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:33.195 17:09:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.195 1+0 records in 00:05:33.195 1+0 records out 00:05:33.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433151 s, 9.5 MB/s 00:05:33.195 17:09:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.195 17:09:02 -- common/autotest_common.sh@872 -- # size=4096 00:05:33.195 17:09:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.195 17:09:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:33.195 17:09:02 -- common/autotest_common.sh@875 -- # return 0 00:05:33.195 17:09:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.195 17:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.195 17:09:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.195 17:09:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.195 17:09:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.453 { 00:05:33.453 "bdev_name": "Malloc0", 00:05:33.453 "nbd_device": "/dev/nbd0" 00:05:33.453 }, 00:05:33.453 { 00:05:33.453 "bdev_name": "Malloc1", 00:05:33.453 "nbd_device": "/dev/nbd1" 00:05:33.453 } 00:05:33.453 ]' 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.453 { 00:05:33.453 "bdev_name": "Malloc0", 00:05:33.453 "nbd_device": "/dev/nbd0" 00:05:33.453 }, 00:05:33.453 { 00:05:33.453 "bdev_name": "Malloc1", 00:05:33.453 "nbd_device": "/dev/nbd1" 00:05:33.453 } 00:05:33.453 ]' 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:05:33.453 /dev/nbd1' 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.453 /dev/nbd1' 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.453 256+0 records in 00:05:33.453 256+0 records out 00:05:33.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00858916 s, 122 MB/s 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.453 256+0 records in 00:05:33.453 256+0 records out 00:05:33.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250148 s, 41.9 MB/s 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.453 256+0 records in 00:05:33.453 256+0 records out 00:05:33.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026036 s, 40.3 MB/s 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.453 17:09:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@51 -- # local i 00:05:33.454 17:09:03 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.454 17:09:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@41 -- # break 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.712 17:09:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@41 -- # break 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.279 17:09:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.279 17:09:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.279 17:09:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.279 17:09:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@65 -- # true 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.539 17:09:04 -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.539 17:09:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.799 17:09:04 -- event/event.sh@35 -- # sleep 3 00:05:34.799 [2024-04-25 17:09:04.726352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.058 [2024-04-25 17:09:04.783026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.058 [2024-04-25 17:09:04.783034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.058 [2024-04-25 17:09:04.813611] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.058 [2024-04-25 17:09:04.813675] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:37.619 spdk_app_start Round 2 00:05:37.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.619 17:09:07 -- event/event.sh@23 -- # for i in {0..2} 00:05:37.619 17:09:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:37.619 17:09:07 -- event/event.sh@25 -- # waitforlisten 62129 /var/tmp/spdk-nbd.sock 00:05:37.619 17:09:07 -- common/autotest_common.sh@817 -- # '[' -z 62129 ']' 00:05:37.619 17:09:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.619 17:09:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.619 17:09:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.619 17:09:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.619 17:09:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.877 17:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:37.878 17:09:07 -- common/autotest_common.sh@850 -- # return 0 00:05:37.878 17:09:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.444 Malloc0 00:05:38.444 17:09:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.445 Malloc1 00:05:38.704 17:09:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@12 -- # local i 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.704 /dev/nbd0 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.704 17:09:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.704 17:09:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:38.704 17:09:08 -- common/autotest_common.sh@855 -- # local i 00:05:38.704 17:09:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:38.704 17:09:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:38.704 17:09:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:38.704 17:09:08 -- common/autotest_common.sh@859 -- # break 00:05:38.704 17:09:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:38.704 17:09:08 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:05:38.704 17:09:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.964 1+0 records in 00:05:38.964 1+0 records out 00:05:38.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033802 s, 12.1 MB/s 00:05:38.964 17:09:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.964 17:09:08 -- common/autotest_common.sh@872 -- # size=4096 00:05:38.964 17:09:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.964 17:09:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:38.964 17:09:08 -- common/autotest_common.sh@875 -- # return 0 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.964 /dev/nbd1 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.964 17:09:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:38.964 17:09:08 -- common/autotest_common.sh@855 -- # local i 00:05:38.964 17:09:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:38.964 17:09:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:38.964 17:09:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:38.964 17:09:08 -- common/autotest_common.sh@859 -- # break 00:05:38.964 17:09:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:38.964 17:09:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:38.964 17:09:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.964 1+0 records in 00:05:38.964 1+0 records out 00:05:38.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278737 s, 14.7 MB/s 00:05:38.964 17:09:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.964 17:09:08 -- common/autotest_common.sh@872 -- # size=4096 00:05:38.964 17:09:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.964 17:09:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:38.964 17:09:08 -- common/autotest_common.sh@875 -- # return 0 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.964 17:09:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.223 { 00:05:39.223 "bdev_name": "Malloc0", 00:05:39.223 "nbd_device": "/dev/nbd0" 00:05:39.223 }, 00:05:39.223 { 00:05:39.223 "bdev_name": "Malloc1", 00:05:39.223 "nbd_device": "/dev/nbd1" 00:05:39.223 } 00:05:39.223 ]' 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.223 { 00:05:39.223 "bdev_name": "Malloc0", 00:05:39.223 "nbd_device": "/dev/nbd0" 00:05:39.223 }, 00:05:39.223 { 00:05:39.223 "bdev_name": "Malloc1", 00:05:39.223 "nbd_device": "/dev/nbd1" 00:05:39.223 } 
00:05:39.223 ]' 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.223 /dev/nbd1' 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.223 /dev/nbd1' 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.223 17:09:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.483 256+0 records in 00:05:39.483 256+0 records out 00:05:39.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767008 s, 137 MB/s 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.483 256+0 records in 00:05:39.483 256+0 records out 00:05:39.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270229 s, 38.8 MB/s 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.483 256+0 records in 00:05:39.483 256+0 records out 00:05:39.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262718 s, 39.9 MB/s 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
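
Annotation: before any data can be verified, round 2 (like every round) creates two malloc bdevs, maps each onto an nbd node and waits for it to become readable. A condensed sketch of that setup, taken from the nbd_start_disks/waitfornbd trace above; as the trace shows, the helper first polls /proc/partitions and then treats the node as ready once a single direct 4 KiB read succeeds:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # Same arguments as the trace: bdev_malloc_create 64 4096 -> Malloc0, Malloc1.
  $rpc bdev_malloc_create 64 4096
  $rpc bdev_malloc_create 64 4096

  i=0
  for bdev in Malloc0 Malloc1; do
      dev="/dev/nbd$((i++))"
      $rpc nbd_start_disk "$bdev" "$dev"
      # waitfornbd: ready once one direct 4 KiB read returns a full block.
      probe=$(mktemp)
      dd if="$dev" of="$probe" bs=4096 count=1 iflag=direct
      [[ $(stat -c %s "$probe") -eq 4096 ]]
      rm -f "$probe"
  done

  # The target should now list both mappings.
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'    # /dev/nbd0, /dev/nbd1
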
00:05:39.483 17:09:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@51 -- # local i 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.483 17:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@41 -- # break 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.743 17:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@41 -- # break 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.002 17:09:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.003 17:09:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@65 -- # true 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.261 17:09:10 -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.261 17:09:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.520 17:09:10 -- event/event.sh@35 -- # sleep 3 00:05:40.779 [2024-04-25 17:09:10.593843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.779 [2024-04-25 17:09:10.640982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.779 [2024-04-25 17:09:10.640991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.779 [2024-04-25 17:09:10.669278] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:40.779 [2024-04-25 17:09:10.669329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.087 17:09:13 -- event/event.sh@38 -- # waitforlisten 62129 /var/tmp/spdk-nbd.sock 00:05:44.087 17:09:13 -- common/autotest_common.sh@817 -- # '[' -z 62129 ']' 00:05:44.087 17:09:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.087 17:09:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.087 17:09:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.087 17:09:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.087 17:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:44.087 17:09:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.087 17:09:13 -- common/autotest_common.sh@850 -- # return 0 00:05:44.087 17:09:13 -- event/event.sh@39 -- # killprocess 62129 00:05:44.087 17:09:13 -- common/autotest_common.sh@936 -- # '[' -z 62129 ']' 00:05:44.087 17:09:13 -- common/autotest_common.sh@940 -- # kill -0 62129 00:05:44.087 17:09:13 -- common/autotest_common.sh@941 -- # uname 00:05:44.087 17:09:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.087 17:09:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62129 00:05:44.087 killing process with pid 62129 00:05:44.087 17:09:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.087 17:09:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.087 17:09:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62129' 00:05:44.087 17:09:13 -- common/autotest_common.sh@955 -- # kill 62129 00:05:44.087 17:09:13 -- common/autotest_common.sh@960 -- # wait 62129 00:05:44.087 spdk_app_start is called in Round 0. 00:05:44.087 Shutdown signal received, stop current app iteration 00:05:44.087 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:44.087 spdk_app_start is called in Round 1. 00:05:44.087 Shutdown signal received, stop current app iteration 00:05:44.087 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:44.087 spdk_app_start is called in Round 2. 00:05:44.087 Shutdown signal received, stop current app iteration 00:05:44.087 Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 reinitialization... 00:05:44.087 spdk_app_start is called in Round 3. 
00:05:44.087 Shutdown signal received, stop current app iteration 00:05:44.087 17:09:13 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.087 17:09:13 -- event/event.sh@42 -- # return 0 00:05:44.087 00:05:44.087 real 0m17.925s 00:05:44.087 user 0m40.486s 00:05:44.087 sys 0m2.596s 00:05:44.087 17:09:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.087 17:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:44.087 ************************************ 00:05:44.087 END TEST app_repeat 00:05:44.087 ************************************ 00:05:44.087 17:09:13 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.087 17:09:13 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.087 17:09:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.087 17:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.087 17:09:13 -- common/autotest_common.sh@10 -- # set +x 00:05:44.087 ************************************ 00:05:44.087 START TEST cpu_locks 00:05:44.087 ************************************ 00:05:44.087 17:09:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.087 * Looking for test storage... 00:05:44.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:44.087 17:09:14 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.087 17:09:14 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.087 17:09:14 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.087 17:09:14 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.087 17:09:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.087 17:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.087 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.347 ************************************ 00:05:44.347 START TEST default_locks 00:05:44.347 ************************************ 00:05:44.347 17:09:14 -- common/autotest_common.sh@1111 -- # default_locks 00:05:44.347 17:09:14 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62749 00:05:44.347 17:09:14 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.347 17:09:14 -- event/cpu_locks.sh@47 -- # waitforlisten 62749 00:05:44.347 17:09:14 -- common/autotest_common.sh@817 -- # '[' -z 62749 ']' 00:05:44.347 17:09:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.347 17:09:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.347 17:09:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.347 17:09:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.347 17:09:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.347 [2024-04-25 17:09:14.185453] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
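
Annotation: each cpu_locks scenario below starts one or more spdk_tgt instances and blocks in waitforlisten until the RPC socket answers (the trace shows max_retries=100). The helper's body is not shown in the trace, so the polling method here — calling rpc_get_methods over the socket — is an assumption, not necessarily what autotest_common.sh actually does:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock

  $tgt -m 0x1 &
  pid=$!

  # Simplified stand-in for waitforlisten: up to 100 tries, abort if the target died.
  for ((i = 1; i <= 100; i++)); do
      kill -0 "$pid" || { echo "spdk_tgt exited early" >&2; exit 1; }
      $rpc -s "$sock" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
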
00:05:44.347 [2024-04-25 17:09:14.186032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62749 ] 00:05:44.347 [2024-04-25 17:09:14.319409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.606 [2024-04-25 17:09:14.373674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.606 17:09:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.606 17:09:14 -- common/autotest_common.sh@850 -- # return 0 00:05:44.606 17:09:14 -- event/cpu_locks.sh@49 -- # locks_exist 62749 00:05:44.606 17:09:14 -- event/cpu_locks.sh@22 -- # lslocks -p 62749 00:05:44.606 17:09:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.173 17:09:14 -- event/cpu_locks.sh@50 -- # killprocess 62749 00:05:45.173 17:09:14 -- common/autotest_common.sh@936 -- # '[' -z 62749 ']' 00:05:45.173 17:09:14 -- common/autotest_common.sh@940 -- # kill -0 62749 00:05:45.173 17:09:14 -- common/autotest_common.sh@941 -- # uname 00:05:45.173 17:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.173 17:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62749 00:05:45.173 17:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.173 killing process with pid 62749 00:05:45.173 17:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.173 17:09:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62749' 00:05:45.173 17:09:14 -- common/autotest_common.sh@955 -- # kill 62749 00:05:45.173 17:09:14 -- common/autotest_common.sh@960 -- # wait 62749 00:05:45.462 17:09:15 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62749 00:05:45.462 17:09:15 -- common/autotest_common.sh@638 -- # local es=0 00:05:45.462 17:09:15 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62749 00:05:45.462 17:09:15 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:45.462 17:09:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.462 17:09:15 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:45.462 17:09:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:45.462 17:09:15 -- common/autotest_common.sh@641 -- # waitforlisten 62749 00:05:45.462 17:09:15 -- common/autotest_common.sh@817 -- # '[' -z 62749 ']' 00:05:45.463 17:09:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.463 17:09:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.463 17:09:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:45.463 17:09:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.463 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.463 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62749) - No such process 00:05:45.463 ERROR: process (pid: 62749) is no longer running 00:05:45.463 17:09:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.463 17:09:15 -- common/autotest_common.sh@850 -- # return 1 00:05:45.463 17:09:15 -- common/autotest_common.sh@641 -- # es=1 00:05:45.463 17:09:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:45.463 17:09:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:45.463 17:09:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:45.463 17:09:15 -- event/cpu_locks.sh@54 -- # no_locks 00:05:45.463 17:09:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.463 17:09:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.463 17:09:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.463 00:05:45.463 real 0m1.122s 00:05:45.463 user 0m1.166s 00:05:45.463 sys 0m0.432s 00:05:45.463 17:09:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.463 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.463 ************************************ 00:05:45.463 END TEST default_locks 00:05:45.463 ************************************ 00:05:45.463 17:09:15 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:45.463 17:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.463 17:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.463 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.463 ************************************ 00:05:45.463 START TEST default_locks_via_rpc 00:05:45.463 ************************************ 00:05:45.463 17:09:15 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:45.463 17:09:15 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62798 00:05:45.463 17:09:15 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.463 17:09:15 -- event/cpu_locks.sh@63 -- # waitforlisten 62798 00:05:45.463 17:09:15 -- common/autotest_common.sh@817 -- # '[' -z 62798 ']' 00:05:45.463 17:09:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.463 17:09:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.463 17:09:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.463 17:09:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.463 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.743 [2024-04-25 17:09:15.416236] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
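
Annotation: the default_locks block above captures the basic invariant — started with -m 0x1 and the default lock behaviour, spdk_tgt holds a file lock on its core-0 lock file, visible through lslocks; once the process is killed, no spdk_cpu_lock files may remain. Reduced to its two checks (pid assumed captured when the target was launched; nullglob makes the empty-array case behave as in the trace):

  # $pid: the running spdk_tgt (e.g. $! from the launch sketch above).
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock is held"

  kill "$pid"
  wait "$pid"

  # After shutdown nothing may be left behind.
  shopt -s nullglob
  lock_files=(/var/tmp/spdk_cpu_lock_*)
  (( ${#lock_files[@]} == 0 )) && echo "no stale cpu lock files"
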
00:05:45.743 [2024-04-25 17:09:15.416354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62798 ] 00:05:45.743 [2024-04-25 17:09:15.550035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.743 [2024-04-25 17:09:15.599593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.002 17:09:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.002 17:09:15 -- common/autotest_common.sh@850 -- # return 0 00:05:46.002 17:09:15 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.002 17:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.002 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.002 17:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.002 17:09:15 -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.002 17:09:15 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.002 17:09:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.002 17:09:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.002 17:09:15 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.002 17:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.002 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:05:46.002 17:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.002 17:09:15 -- event/cpu_locks.sh@71 -- # locks_exist 62798 00:05:46.002 17:09:15 -- event/cpu_locks.sh@22 -- # lslocks -p 62798 00:05:46.002 17:09:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.262 17:09:16 -- event/cpu_locks.sh@73 -- # killprocess 62798 00:05:46.262 17:09:16 -- common/autotest_common.sh@936 -- # '[' -z 62798 ']' 00:05:46.262 17:09:16 -- common/autotest_common.sh@940 -- # kill -0 62798 00:05:46.262 17:09:16 -- common/autotest_common.sh@941 -- # uname 00:05:46.262 17:09:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.262 17:09:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62798 00:05:46.262 17:09:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.262 17:09:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.262 killing process with pid 62798 00:05:46.262 17:09:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62798' 00:05:46.262 17:09:16 -- common/autotest_common.sh@955 -- # kill 62798 00:05:46.262 17:09:16 -- common/autotest_common.sh@960 -- # wait 62798 00:05:46.521 00:05:46.521 real 0m1.127s 00:05:46.521 user 0m1.186s 00:05:46.521 sys 0m0.428s 00:05:46.521 17:09:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.521 17:09:16 -- common/autotest_common.sh@10 -- # set +x 00:05:46.521 ************************************ 00:05:46.521 END TEST default_locks_via_rpc 00:05:46.521 ************************************ 00:05:46.780 17:09:16 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:46.780 17:09:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.780 17:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.780 17:09:16 -- common/autotest_common.sh@10 -- # set +x 00:05:46.780 ************************************ 00:05:46.780 START TEST non_locking_app_on_locked_coremask 00:05:46.780 ************************************ 00:05:46.780 17:09:16 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:46.780 17:09:16 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62858 00:05:46.780 17:09:16 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.780 17:09:16 -- event/cpu_locks.sh@81 -- # waitforlisten 62858 /var/tmp/spdk.sock 00:05:46.780 17:09:16 -- common/autotest_common.sh@817 -- # '[' -z 62858 ']' 00:05:46.780 17:09:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.780 17:09:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.781 17:09:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.781 17:09:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.781 17:09:16 -- common/autotest_common.sh@10 -- # set +x 00:05:46.781 [2024-04-25 17:09:16.675573] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:46.781 [2024-04-25 17:09:16.675671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62858 ] 00:05:47.039 [2024-04-25 17:09:16.815119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.039 [2024-04-25 17:09:16.869393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.974 17:09:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.974 17:09:17 -- common/autotest_common.sh@850 -- # return 0 00:05:47.974 17:09:17 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62886 00:05:47.974 17:09:17 -- event/cpu_locks.sh@85 -- # waitforlisten 62886 /var/tmp/spdk2.sock 00:05:47.974 17:09:17 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.974 17:09:17 -- common/autotest_common.sh@817 -- # '[' -z 62886 ']' 00:05:47.974 17:09:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.974 17:09:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.974 17:09:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.974 17:09:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.974 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.974 [2024-04-25 17:09:17.671055] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:47.974 [2024-04-25 17:09:17.671179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62886 ] 00:05:47.974 [2024-04-25 17:09:17.810923] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
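
Annotation: default_locks_via_rpc (above) shows the same locks being dropped and re-taken at runtime rather than at startup — framework_disable_cpumask_locks releases the per-core lock files, framework_enable_cpumask_locks claims them again. Sketched against a running single-core target on the default socket, with the pid assumed from launch time:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $rpc framework_disable_cpumask_locks        # lock files released
  shopt -s nullglob
  locks=(/var/tmp/spdk_cpu_lock_*)
  (( ${#locks[@]} == 0 )) && echo "locks released"

  $rpc framework_enable_cpumask_locks         # locks re-acquired for the active cores
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks held again"
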
00:05:47.974 [2024-04-25 17:09:17.810984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.974 [2024-04-25 17:09:17.911885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.911 17:09:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.911 17:09:18 -- common/autotest_common.sh@850 -- # return 0 00:05:48.911 17:09:18 -- event/cpu_locks.sh@87 -- # locks_exist 62858 00:05:48.911 17:09:18 -- event/cpu_locks.sh@22 -- # lslocks -p 62858 00:05:48.911 17:09:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.478 17:09:19 -- event/cpu_locks.sh@89 -- # killprocess 62858 00:05:49.478 17:09:19 -- common/autotest_common.sh@936 -- # '[' -z 62858 ']' 00:05:49.478 17:09:19 -- common/autotest_common.sh@940 -- # kill -0 62858 00:05:49.478 17:09:19 -- common/autotest_common.sh@941 -- # uname 00:05:49.478 17:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.478 17:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62858 00:05:49.478 17:09:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.478 17:09:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.478 killing process with pid 62858 00:05:49.478 17:09:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62858' 00:05:49.478 17:09:19 -- common/autotest_common.sh@955 -- # kill 62858 00:05:49.478 17:09:19 -- common/autotest_common.sh@960 -- # wait 62858 00:05:50.045 17:09:19 -- event/cpu_locks.sh@90 -- # killprocess 62886 00:05:50.045 17:09:19 -- common/autotest_common.sh@936 -- # '[' -z 62886 ']' 00:05:50.045 17:09:19 -- common/autotest_common.sh@940 -- # kill -0 62886 00:05:50.045 17:09:19 -- common/autotest_common.sh@941 -- # uname 00:05:50.045 17:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.045 17:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62886 00:05:50.045 17:09:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.045 17:09:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.045 17:09:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62886' 00:05:50.045 killing process with pid 62886 00:05:50.045 17:09:19 -- common/autotest_common.sh@955 -- # kill 62886 00:05:50.045 17:09:19 -- common/autotest_common.sh@960 -- # wait 62886 00:05:50.314 00:05:50.314 real 0m3.532s 00:05:50.314 user 0m4.176s 00:05:50.314 sys 0m0.845s 00:05:50.314 17:09:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.314 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.315 ************************************ 00:05:50.315 END TEST non_locking_app_on_locked_coremask 00:05:50.315 ************************************ 00:05:50.315 17:09:20 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:50.315 17:09:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.315 17:09:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.315 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.315 ************************************ 00:05:50.315 START TEST locking_app_on_unlocked_coremask 00:05:50.315 ************************************ 00:05:50.315 17:09:20 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:50.315 17:09:20 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:50.315 17:09:20 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=62959 00:05:50.315 17:09:20 -- event/cpu_locks.sh@99 -- # waitforlisten 62959 /var/tmp/spdk.sock 00:05:50.315 17:09:20 -- common/autotest_common.sh@817 -- # '[' -z 62959 ']' 00:05:50.315 17:09:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.315 17:09:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.315 17:09:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.315 17:09:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.315 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.579 [2024-04-25 17:09:20.315712] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:50.579 [2024-04-25 17:09:20.315808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62959 ] 00:05:50.579 [2024-04-25 17:09:20.449352] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.579 [2024-04-25 17:09:20.449412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.579 [2024-04-25 17:09:20.501389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.838 17:09:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.838 17:09:20 -- common/autotest_common.sh@850 -- # return 0 00:05:50.838 17:09:20 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.838 17:09:20 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62979 00:05:50.838 17:09:20 -- event/cpu_locks.sh@103 -- # waitforlisten 62979 /var/tmp/spdk2.sock 00:05:50.838 17:09:20 -- common/autotest_common.sh@817 -- # '[' -z 62979 ']' 00:05:50.838 17:09:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.838 17:09:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.838 17:09:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.838 17:09:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.838 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.838 [2024-04-25 17:09:20.704978] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
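
Annotation: this scenario and the previous one are mirror images — as long as at least one of two spdk_tgt instances sharing core 0 runs with --disable-cpumask-locks, both can start; a conflict only arises between two lock-holding instances. The shape of the launch, with the second instance on its own RPC socket as in the trace (waitforlisten between the steps omitted for brevity):

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # Instance A opts out of core locking, instance B keeps the default.
  $tgt -m 0x1 --disable-cpumask-locks &       # prints "CPU core locks deactivated."
  pid_a=$!
  $tgt -m 0x1 -r /var/tmp/spdk2.sock &        # takes the core-0 lock unopposed
  pid_b=$!

  # Only the locking instance shows up in lslocks.
  lslocks -p "$pid_b" | grep -q spdk_cpu_lock && echo "instance B holds the core 0 lock"
  lslocks -p "$pid_a" | grep -q spdk_cpu_lock || echo "instance A holds no core lock"

  kill "$pid_a" "$pid_b"; wait
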
00:05:50.838 [2024-04-25 17:09:20.705089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62979 ] 00:05:51.096 [2024-04-25 17:09:20.836595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.096 [2024-04-25 17:09:20.950377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.031 17:09:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.031 17:09:21 -- common/autotest_common.sh@850 -- # return 0 00:05:52.031 17:09:21 -- event/cpu_locks.sh@105 -- # locks_exist 62979 00:05:52.031 17:09:21 -- event/cpu_locks.sh@22 -- # lslocks -p 62979 00:05:52.031 17:09:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.599 17:09:22 -- event/cpu_locks.sh@107 -- # killprocess 62959 00:05:52.599 17:09:22 -- common/autotest_common.sh@936 -- # '[' -z 62959 ']' 00:05:52.599 17:09:22 -- common/autotest_common.sh@940 -- # kill -0 62959 00:05:52.599 17:09:22 -- common/autotest_common.sh@941 -- # uname 00:05:52.599 17:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.599 17:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62959 00:05:52.599 17:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.599 17:09:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.599 killing process with pid 62959 00:05:52.599 17:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62959' 00:05:52.599 17:09:22 -- common/autotest_common.sh@955 -- # kill 62959 00:05:52.599 17:09:22 -- common/autotest_common.sh@960 -- # wait 62959 00:05:53.169 17:09:23 -- event/cpu_locks.sh@108 -- # killprocess 62979 00:05:53.169 17:09:23 -- common/autotest_common.sh@936 -- # '[' -z 62979 ']' 00:05:53.169 17:09:23 -- common/autotest_common.sh@940 -- # kill -0 62979 00:05:53.169 17:09:23 -- common/autotest_common.sh@941 -- # uname 00:05:53.169 17:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.169 17:09:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62979 00:05:53.169 17:09:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.169 17:09:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.169 killing process with pid 62979 00:05:53.169 17:09:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62979' 00:05:53.169 17:09:23 -- common/autotest_common.sh@955 -- # kill 62979 00:05:53.169 17:09:23 -- common/autotest_common.sh@960 -- # wait 62979 00:05:53.428 00:05:53.428 real 0m3.055s 00:05:53.428 user 0m3.516s 00:05:53.428 sys 0m0.891s 00:05:53.428 17:09:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.428 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.428 ************************************ 00:05:53.428 END TEST locking_app_on_unlocked_coremask 00:05:53.428 ************************************ 00:05:53.428 17:09:23 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:53.428 17:09:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.428 17:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.428 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.687 ************************************ 00:05:53.687 START TEST locking_app_on_locked_coremask 00:05:53.687 
************************************ 00:05:53.687 17:09:23 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:53.687 17:09:23 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.687 17:09:23 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63051 00:05:53.687 17:09:23 -- event/cpu_locks.sh@116 -- # waitforlisten 63051 /var/tmp/spdk.sock 00:05:53.687 17:09:23 -- common/autotest_common.sh@817 -- # '[' -z 63051 ']' 00:05:53.687 17:09:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.687 17:09:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.687 17:09:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.687 17:09:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.687 17:09:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.687 [2024-04-25 17:09:23.508994] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:53.687 [2024-04-25 17:09:23.509090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63051 ] 00:05:53.687 [2024-04-25 17:09:23.649924] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.946 [2024-04-25 17:09:23.704516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.581 17:09:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.581 17:09:24 -- common/autotest_common.sh@850 -- # return 0 00:05:54.581 17:09:24 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63079 00:05:54.581 17:09:24 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.581 17:09:24 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63079 /var/tmp/spdk2.sock 00:05:54.581 17:09:24 -- common/autotest_common.sh@638 -- # local es=0 00:05:54.581 17:09:24 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63079 /var/tmp/spdk2.sock 00:05:54.581 17:09:24 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:54.581 17:09:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.581 17:09:24 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:54.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.581 17:09:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.581 17:09:24 -- common/autotest_common.sh@641 -- # waitforlisten 63079 /var/tmp/spdk2.sock 00:05:54.581 17:09:24 -- common/autotest_common.sh@817 -- # '[' -z 63079 ']' 00:05:54.581 17:09:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.581 17:09:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.581 17:09:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.581 17:09:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.581 17:09:24 -- common/autotest_common.sh@10 -- # set +x 00:05:54.581 [2024-04-25 17:09:24.500505] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:54.581 [2024-04-25 17:09:24.500604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63079 ] 00:05:54.839 [2024-04-25 17:09:24.643561] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63051 has claimed it. 00:05:54.839 [2024-04-25 17:09:24.643644] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.407 ERROR: process (pid: 63079) is no longer running 00:05:55.407 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63079) - No such process 00:05:55.407 17:09:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.407 17:09:25 -- common/autotest_common.sh@850 -- # return 1 00:05:55.407 17:09:25 -- common/autotest_common.sh@641 -- # es=1 00:05:55.407 17:09:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.407 17:09:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:55.407 17:09:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.407 17:09:25 -- event/cpu_locks.sh@122 -- # locks_exist 63051 00:05:55.407 17:09:25 -- event/cpu_locks.sh@22 -- # lslocks -p 63051 00:05:55.407 17:09:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.666 17:09:25 -- event/cpu_locks.sh@124 -- # killprocess 63051 00:05:55.666 17:09:25 -- common/autotest_common.sh@936 -- # '[' -z 63051 ']' 00:05:55.666 17:09:25 -- common/autotest_common.sh@940 -- # kill -0 63051 00:05:55.666 17:09:25 -- common/autotest_common.sh@941 -- # uname 00:05:55.666 17:09:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.666 17:09:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63051 00:05:55.666 killing process with pid 63051 00:05:55.666 17:09:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.666 17:09:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.666 17:09:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63051' 00:05:55.666 17:09:25 -- common/autotest_common.sh@955 -- # kill 63051 00:05:55.666 17:09:25 -- common/autotest_common.sh@960 -- # wait 63051 00:05:55.925 ************************************ 00:05:55.925 END TEST locking_app_on_locked_coremask 00:05:55.925 ************************************ 00:05:55.925 00:05:55.925 real 0m2.454s 00:05:55.925 user 0m2.948s 00:05:55.925 sys 0m0.539s 00:05:55.925 17:09:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.925 17:09:25 -- common/autotest_common.sh@10 -- # set +x 00:05:56.185 17:09:25 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.185 17:09:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.185 17:09:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.185 17:09:25 -- common/autotest_common.sh@10 -- # set +x 00:05:56.185 ************************************ 00:05:56.185 START TEST locking_overlapped_coremask 00:05:56.185 ************************************ 00:05:56.185 17:09:26 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:56.185 17:09:26 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63141 00:05:56.185 17:09:26 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.185 17:09:26 -- event/cpu_locks.sh@133 -- # waitforlisten 63141 /var/tmp/spdk.sock 00:05:56.185 
17:09:26 -- common/autotest_common.sh@817 -- # '[' -z 63141 ']' 00:05:56.185 17:09:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.185 17:09:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.185 17:09:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.185 17:09:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.185 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.185 [2024-04-25 17:09:26.066250] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:56.185 [2024-04-25 17:09:26.066334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63141 ] 00:05:56.445 [2024-04-25 17:09:26.196952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.445 [2024-04-25 17:09:26.251644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.445 [2024-04-25 17:09:26.251785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.445 [2024-04-25 17:09:26.251787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.445 17:09:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.445 17:09:26 -- common/autotest_common.sh@850 -- # return 0 00:05:56.445 17:09:26 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63152 00:05:56.445 17:09:26 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63152 /var/tmp/spdk2.sock 00:05:56.445 17:09:26 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:56.445 17:09:26 -- common/autotest_common.sh@638 -- # local es=0 00:05:56.445 17:09:26 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63152 /var/tmp/spdk2.sock 00:05:56.445 17:09:26 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:56.445 17:09:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.445 17:09:26 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:56.445 17:09:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:56.445 17:09:26 -- common/autotest_common.sh@641 -- # waitforlisten 63152 /var/tmp/spdk2.sock 00:05:56.445 17:09:26 -- common/autotest_common.sh@817 -- # '[' -z 63152 ']' 00:05:56.445 17:09:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.445 17:09:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.445 17:09:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.445 17:09:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.445 17:09:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.704 [2024-04-25 17:09:26.468115] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:56.704 [2024-04-25 17:09:26.468224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63152 ] 00:05:56.704 [2024-04-25 17:09:26.610093] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63141 has claimed it. 00:05:56.704 [2024-04-25 17:09:26.612782] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.272 ERROR: process (pid: 63152) is no longer running 00:05:57.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63152) - No such process 00:05:57.272 17:09:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.272 17:09:27 -- common/autotest_common.sh@850 -- # return 1 00:05:57.272 17:09:27 -- common/autotest_common.sh@641 -- # es=1 00:05:57.273 17:09:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:57.273 17:09:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:57.273 17:09:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:57.273 17:09:27 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.273 17:09:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.273 17:09:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.273 17:09:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.273 17:09:27 -- event/cpu_locks.sh@141 -- # killprocess 63141 00:05:57.273 17:09:27 -- common/autotest_common.sh@936 -- # '[' -z 63141 ']' 00:05:57.273 17:09:27 -- common/autotest_common.sh@940 -- # kill -0 63141 00:05:57.273 17:09:27 -- common/autotest_common.sh@941 -- # uname 00:05:57.273 17:09:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.273 17:09:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63141 00:05:57.273 killing process with pid 63141 00:05:57.273 17:09:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.273 17:09:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.273 17:09:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63141' 00:05:57.273 17:09:27 -- common/autotest_common.sh@955 -- # kill 63141 00:05:57.273 17:09:27 -- common/autotest_common.sh@960 -- # wait 63141 00:05:57.532 ************************************ 00:05:57.532 END TEST locking_overlapped_coremask 00:05:57.532 ************************************ 00:05:57.532 00:05:57.532 real 0m1.472s 00:05:57.532 user 0m3.978s 00:05:57.532 sys 0m0.285s 00:05:57.532 17:09:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.532 17:09:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.791 17:09:27 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.791 17:09:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.791 17:09:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.791 17:09:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.791 ************************************ 00:05:57.791 START TEST locking_overlapped_coremask_via_rpc 00:05:57.791 ************************************ 
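
Annotation: locking_overlapped_coremask (above) is the multi-core version of the conflict — -m 0x7 claims cores 0-2, so a second instance with the overlapping mask -m 0x1c (cores 2-4) must refuse to start with "Cannot create lock on core 2", and exactly the three lock files of the surviving instance must remain. The final check_remaining_locks step reduces to a glob-versus-brace-expansion comparison:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $tgt -m 0x7 &                               # locks cores 0, 1 and 2
  # (waitforlisten omitted)
  $tgt -m 0x1c -r /var/tmp/spdk2.sock || true # overlaps on core 2 -> refuses to start

  # After the failed start, only cores 0-2 may be locked.
  locks=(/var/tmp/spdk_cpu_lock_*)                  # what actually exists
  expected=(/var/tmp/spdk_cpu_lock_{000..002})      # what the 0x7 mask implies
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo "lock files match coremask 0x7"
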
00:05:57.791 17:09:27 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:57.791 17:09:27 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63202 00:05:57.791 17:09:27 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.791 17:09:27 -- event/cpu_locks.sh@149 -- # waitforlisten 63202 /var/tmp/spdk.sock 00:05:57.792 17:09:27 -- common/autotest_common.sh@817 -- # '[' -z 63202 ']' 00:05:57.792 17:09:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.792 17:09:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.792 17:09:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.792 17:09:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.792 17:09:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.792 [2024-04-25 17:09:27.661299] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:05:57.792 [2024-04-25 17:09:27.661561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63202 ] 00:05:58.051 [2024-04-25 17:09:27.794643] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.051 [2024-04-25 17:09:27.794693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.051 [2024-04-25 17:09:27.846734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.051 [2024-04-25 17:09:27.846859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.051 [2024-04-25 17:09:27.846862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.051 17:09:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.051 17:09:27 -- common/autotest_common.sh@850 -- # return 0 00:05:58.051 17:09:27 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.051 17:09:27 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63224 00:05:58.051 17:09:27 -- event/cpu_locks.sh@153 -- # waitforlisten 63224 /var/tmp/spdk2.sock 00:05:58.051 17:09:27 -- common/autotest_common.sh@817 -- # '[' -z 63224 ']' 00:05:58.051 17:09:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.051 17:09:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.051 17:09:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.051 17:09:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.051 17:09:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.310 [2024-04-25 17:09:28.070466] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:05:58.310 [2024-04-25 17:09:28.070567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63224 ] 00:05:58.310 [2024-04-25 17:09:28.219271] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.310 [2024-04-25 17:09:28.219480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.569 [2024-04-25 17:09:28.323763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.569 [2024-04-25 17:09:28.327773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.569 [2024-04-25 17:09:28.327776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.139 17:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.139 17:09:29 -- common/autotest_common.sh@850 -- # return 0 00:05:59.139 17:09:29 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.139 17:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:59.139 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.139 17:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:59.139 17:09:29 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.139 17:09:29 -- common/autotest_common.sh@638 -- # local es=0 00:05:59.139 17:09:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.139 17:09:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:59.139 17:09:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:59.140 17:09:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:59.140 17:09:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:59.140 17:09:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.140 17:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:59.140 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.140 [2024-04-25 17:09:29.052940] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63202 has claimed it. 00:05:59.140 2024/04/25 17:09:29 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:59.140 request: 00:05:59.140 { 00:05:59.140 "method": "framework_enable_cpumask_locks", 00:05:59.140 "params": {} 00:05:59.140 } 00:05:59.140 Got JSON-RPC error response 00:05:59.140 GoRPCClient: error on JSON-RPC call 00:05:59.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
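This failure is the expected path of the test: both targets were launched with --disable-cpumask-locks, and the first one (/var/tmp/spdk.sock, -m 0x7, cores 0-2) has just taken its locks via the same RPC, so asking the second one (/var/tmp/spdk2.sock, -m 0x1c, cores 2-4) to do the same collides on core 2 and returns -32603. rpc_cmd in the trace forwards to scripts/rpc.py, so the same negative check can be reproduced by hand while both targets are up; a sketch using the repo path from this run:

    # Expected to fail with -32603 while the first target still owns core 2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        && echo 'unexpected success' \
        || echo 'core claim failed as expected'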
00:05:59.140 17:09:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:59.140 17:09:29 -- common/autotest_common.sh@641 -- # es=1 00:05:59.140 17:09:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:59.140 17:09:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:59.140 17:09:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:59.140 17:09:29 -- event/cpu_locks.sh@158 -- # waitforlisten 63202 /var/tmp/spdk.sock 00:05:59.140 17:09:29 -- common/autotest_common.sh@817 -- # '[' -z 63202 ']' 00:05:59.140 17:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.140 17:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.140 17:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.140 17:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.140 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.399 17:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.399 17:09:29 -- common/autotest_common.sh@850 -- # return 0 00:05:59.399 17:09:29 -- event/cpu_locks.sh@159 -- # waitforlisten 63224 /var/tmp/spdk2.sock 00:05:59.399 17:09:29 -- common/autotest_common.sh@817 -- # '[' -z 63224 ']' 00:05:59.399 17:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.399 17:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.399 17:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.399 17:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.399 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.659 17:09:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.659 17:09:29 -- common/autotest_common.sh@850 -- # return 0 00:05:59.659 17:09:29 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.659 17:09:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.659 ************************************ 00:05:59.659 END TEST locking_overlapped_coremask_via_rpc 00:05:59.659 ************************************ 00:05:59.659 17:09:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.659 17:09:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.659 00:05:59.659 real 0m1.980s 00:05:59.659 user 0m1.126s 00:05:59.659 sys 0m0.174s 00:05:59.659 17:09:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.659 17:09:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.659 17:09:29 -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.659 17:09:29 -- event/cpu_locks.sh@15 -- # [[ -z 63202 ]] 00:05:59.659 17:09:29 -- event/cpu_locks.sh@15 -- # killprocess 63202 00:05:59.659 17:09:29 -- common/autotest_common.sh@936 -- # '[' -z 63202 ']' 00:05:59.659 17:09:29 -- common/autotest_common.sh@940 -- # kill -0 63202 00:05:59.659 17:09:29 -- common/autotest_common.sh@941 -- # uname 00:05:59.659 17:09:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.659 17:09:29 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 63202 00:05:59.918 killing process with pid 63202 00:05:59.918 17:09:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.918 17:09:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.918 17:09:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63202' 00:05:59.918 17:09:29 -- common/autotest_common.sh@955 -- # kill 63202 00:05:59.918 17:09:29 -- common/autotest_common.sh@960 -- # wait 63202 00:06:00.177 17:09:29 -- event/cpu_locks.sh@16 -- # [[ -z 63224 ]] 00:06:00.177 17:09:29 -- event/cpu_locks.sh@16 -- # killprocess 63224 00:06:00.177 17:09:29 -- common/autotest_common.sh@936 -- # '[' -z 63224 ']' 00:06:00.177 17:09:29 -- common/autotest_common.sh@940 -- # kill -0 63224 00:06:00.178 17:09:29 -- common/autotest_common.sh@941 -- # uname 00:06:00.178 17:09:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.178 17:09:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63224 00:06:00.178 killing process with pid 63224 00:06:00.178 17:09:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:00.178 17:09:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:00.178 17:09:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63224' 00:06:00.178 17:09:29 -- common/autotest_common.sh@955 -- # kill 63224 00:06:00.178 17:09:29 -- common/autotest_common.sh@960 -- # wait 63224 00:06:00.438 17:09:30 -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.438 Process with pid 63202 is not found 00:06:00.438 17:09:30 -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.438 17:09:30 -- event/cpu_locks.sh@15 -- # [[ -z 63202 ]] 00:06:00.438 17:09:30 -- event/cpu_locks.sh@15 -- # killprocess 63202 00:06:00.438 17:09:30 -- common/autotest_common.sh@936 -- # '[' -z 63202 ']' 00:06:00.438 17:09:30 -- common/autotest_common.sh@940 -- # kill -0 63202 00:06:00.438 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63202) - No such process 00:06:00.438 17:09:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63202 is not found' 00:06:00.438 17:09:30 -- event/cpu_locks.sh@16 -- # [[ -z 63224 ]] 00:06:00.438 17:09:30 -- event/cpu_locks.sh@16 -- # killprocess 63224 00:06:00.438 17:09:30 -- common/autotest_common.sh@936 -- # '[' -z 63224 ']' 00:06:00.438 Process with pid 63224 is not found 00:06:00.438 17:09:30 -- common/autotest_common.sh@940 -- # kill -0 63224 00:06:00.438 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63224) - No such process 00:06:00.438 17:09:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63224 is not found' 00:06:00.438 17:09:30 -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.438 00:06:00.438 real 0m16.261s 00:06:00.438 user 0m28.264s 00:06:00.438 sys 0m4.432s 00:06:00.438 17:09:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.438 ************************************ 00:06:00.438 END TEST cpu_locks 00:06:00.438 ************************************ 00:06:00.438 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.438 ************************************ 00:06:00.438 END TEST event 00:06:00.438 ************************************ 00:06:00.438 00:06:00.438 real 0m43.333s 00:06:00.438 user 1m24.033s 00:06:00.438 sys 0m7.903s 00:06:00.438 17:09:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.438 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.438 17:09:30 -- spdk/autotest.sh@178 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:00.438 17:09:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.438 17:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.438 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.438 ************************************ 00:06:00.438 START TEST thread 00:06:00.438 ************************************ 00:06:00.438 17:09:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:00.697 * Looking for test storage... 00:06:00.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:00.697 17:09:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.697 17:09:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:00.697 17:09:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.697 17:09:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.697 ************************************ 00:06:00.697 START TEST thread_poller_perf 00:06:00.697 ************************************ 00:06:00.697 17:09:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.697 [2024-04-25 17:09:30.557792] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:00.697 [2024-04-25 17:09:30.558382] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63375 ] 00:06:00.956 [2024-04-25 17:09:30.692815] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.956 [2024-04-25 17:09:30.744005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.956 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:01.894 ====================================== 00:06:01.894 busy:2208898144 (cyc) 00:06:01.894 total_run_count: 375000 00:06:01.894 tsc_hz: 2200000000 (cyc) 00:06:01.894 ====================================== 00:06:01.894 poller_cost: 5890 (cyc), 2677 (nsec) 00:06:01.894 00:06:01.894 real 0m1.303s 00:06:01.894 user 0m1.160s 00:06:01.894 sys 0m0.037s 00:06:01.894 17:09:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.894 17:09:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.894 ************************************ 00:06:01.894 END TEST thread_poller_perf 00:06:01.894 ************************************ 00:06:02.153 17:09:31 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.153 17:09:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:02.153 17:09:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.153 17:09:31 -- common/autotest_common.sh@10 -- # set +x 00:06:02.153 ************************************ 00:06:02.153 START TEST thread_poller_perf 00:06:02.153 ************************************ 00:06:02.153 17:09:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.153 [2024-04-25 17:09:31.983696] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:02.153 [2024-04-25 17:09:31.983796] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63420 ] 00:06:02.153 [2024-04-25 17:09:32.122184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.413 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:02.413 [2024-04-25 17:09:32.175520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.349 ====================================== 00:06:03.349 busy:2202273178 (cyc) 00:06:03.349 total_run_count: 4948000 00:06:03.349 tsc_hz: 2200000000 (cyc) 00:06:03.349 ====================================== 00:06:03.349 poller_cost: 445 (cyc), 202 (nsec) 00:06:03.349 00:06:03.349 real 0m1.291s 00:06:03.349 user 0m1.149s 00:06:03.349 sys 0m0.036s 00:06:03.349 ************************************ 00:06:03.349 END TEST thread_poller_perf 00:06:03.349 ************************************ 00:06:03.349 17:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.349 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.349 17:09:33 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.349 ************************************ 00:06:03.349 END TEST thread 00:06:03.349 ************************************ 00:06:03.349 00:06:03.349 real 0m2.919s 00:06:03.349 user 0m2.434s 00:06:03.349 sys 0m0.237s 00:06:03.349 17:09:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.349 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.607 17:09:33 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:03.607 17:09:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.607 17:09:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.607 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.607 ************************************ 00:06:03.607 START TEST accel 00:06:03.607 ************************************ 00:06:03.607 17:09:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:03.607 * Looking for test storage... 00:06:03.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:03.608 17:09:33 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:03.608 17:09:33 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:03.608 17:09:33 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.608 17:09:33 -- accel/accel.sh@62 -- # spdk_tgt_pid=63494 00:06:03.608 17:09:33 -- accel/accel.sh@63 -- # waitforlisten 63494 00:06:03.608 17:09:33 -- common/autotest_common.sh@817 -- # '[' -z 63494 ']' 00:06:03.608 17:09:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.608 17:09:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:03.608 17:09:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
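The two poller_perf summaries above are internally consistent: poller_cost is just busy cycles divided by total_run_count, converted to nanoseconds with the reported 2.2 GHz TSC. Re-deriving the 1 µs-period run (the 0 µs run works out the same way to 445 cyc / 202 nsec):

    # poller_cost = busy cycles / total_run_count; nsec = cycles * 1e9 / tsc_hz
    echo $(( 2208898144 / 375000 ))                             # 5890 cycles per poller invocation
    echo $(( 2208898144 / 375000 * 1000000000 / 2200000000 ))   # ~2677 nsec at tsc_hz=2200000000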
00:06:03.608 17:09:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:03.608 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.608 17:09:33 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:03.608 17:09:33 -- accel/accel.sh@61 -- # build_accel_config 00:06:03.608 17:09:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.608 17:09:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.608 17:09:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.608 17:09:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.608 17:09:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.608 17:09:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:03.608 17:09:33 -- accel/accel.sh@41 -- # jq -r . 00:06:03.608 [2024-04-25 17:09:33.563275] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:03.608 [2024-04-25 17:09:33.563581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63494 ] 00:06:03.867 [2024-04-25 17:09:33.694850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.867 [2024-04-25 17:09:33.753437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.126 17:09:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:04.126 17:09:33 -- common/autotest_common.sh@850 -- # return 0 00:06:04.126 17:09:33 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:04.126 17:09:33 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:04.126 17:09:33 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:04.126 17:09:33 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:04.126 17:09:33 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:04.126 17:09:33 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:04.126 17:09:33 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:04.126 17:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.126 17:09:33 -- common/autotest_common.sh@10 -- # set +x 00:06:04.126 17:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # 
IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # IFS== 00:06:04.126 17:09:33 -- accel/accel.sh@72 -- # read -r opc module 00:06:04.126 17:09:33 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.126 17:09:33 -- accel/accel.sh@75 -- # killprocess 63494 00:06:04.126 17:09:33 -- common/autotest_common.sh@936 -- # '[' -z 63494 ']' 00:06:04.126 17:09:33 -- common/autotest_common.sh@940 -- # kill -0 63494 00:06:04.126 17:09:33 -- common/autotest_common.sh@941 -- # uname 00:06:04.126 17:09:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.126 17:09:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63494 00:06:04.126 17:09:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.126 17:09:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.126 17:09:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63494' 00:06:04.126 killing process with pid 63494 00:06:04.126 17:09:34 -- common/autotest_common.sh@955 -- # kill 63494 00:06:04.126 17:09:34 -- common/autotest_common.sh@960 -- # wait 63494 00:06:04.385 17:09:34 -- accel/accel.sh@76 -- # trap - ERR 00:06:04.385 17:09:34 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:04.385 17:09:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:04.385 17:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.385 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.385 17:09:34 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:04.385 17:09:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:04.385 17:09:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.385 17:09:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.385 17:09:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.385 17:09:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.385 17:09:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.385 17:09:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.385 17:09:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.385 17:09:34 -- accel/accel.sh@41 -- # jq -r . 
00:06:04.644 17:09:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.644 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.644 17:09:34 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:04.644 17:09:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:04.644 17:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.644 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:04.644 ************************************ 00:06:04.644 START TEST accel_missing_filename 00:06:04.644 ************************************ 00:06:04.644 17:09:34 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:04.644 17:09:34 -- common/autotest_common.sh@638 -- # local es=0 00:06:04.644 17:09:34 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:04.644 17:09:34 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:04.644 17:09:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.644 17:09:34 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:04.644 17:09:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.644 17:09:34 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:04.644 17:09:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.644 17:09:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:04.644 17:09:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.644 17:09:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.644 17:09:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.644 17:09:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.644 17:09:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.644 17:09:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.644 17:09:34 -- accel/accel.sh@41 -- # jq -r . 00:06:04.644 [2024-04-25 17:09:34.504675] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:04.644 [2024-04-25 17:09:34.505090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63558 ] 00:06:04.903 [2024-04-25 17:09:34.644347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.903 [2024-04-25 17:09:34.713414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.903 [2024-04-25 17:09:34.748866] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.903 [2024-04-25 17:09:34.790790] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:04.903 A filename is required. 
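The 'A filename is required.' error and the non-zero exit status are exactly what this negative test wants: accel_perf was run with '-t 1 -w compress' and no '-l' argument, and for compress/decompress workloads -l names the uncompressed input file. A sketch of the failing invocation next to a form that satisfies the filename check (whether compress then runs to completion still depends on the accel modules available):

    # Fails as above: compress workload with no uncompressed input file.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # Supplying -l satisfies the filename check; this is the file the compress_verify test uses:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib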
00:06:05.161 17:09:34 -- common/autotest_common.sh@641 -- # es=234 00:06:05.161 17:09:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.161 17:09:34 -- common/autotest_common.sh@650 -- # es=106 00:06:05.161 17:09:34 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:05.161 17:09:34 -- common/autotest_common.sh@658 -- # es=1 00:06:05.162 17:09:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.162 00:06:05.162 real 0m0.404s 00:06:05.162 user 0m0.262s 00:06:05.162 sys 0m0.075s 00:06:05.162 ************************************ 00:06:05.162 END TEST accel_missing_filename 00:06:05.162 ************************************ 00:06:05.162 17:09:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.162 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.162 17:09:34 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.162 17:09:34 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:05.162 17:09:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.162 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.162 ************************************ 00:06:05.162 START TEST accel_compress_verify 00:06:05.162 ************************************ 00:06:05.162 17:09:34 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.162 17:09:34 -- common/autotest_common.sh@638 -- # local es=0 00:06:05.162 17:09:34 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.162 17:09:34 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:05.162 17:09:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.162 17:09:34 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:05.162 17:09:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.162 17:09:34 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.162 17:09:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.162 17:09:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:05.162 17:09:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.162 17:09:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.162 17:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.162 17:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.162 17:09:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.162 17:09:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.162 17:09:35 -- accel/accel.sh@41 -- # jq -r . 00:06:05.162 [2024-04-25 17:09:35.025773] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:05.162 [2024-04-25 17:09:35.025882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63591 ] 00:06:05.421 [2024-04-25 17:09:35.163563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.421 [2024-04-25 17:09:35.214573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.421 [2024-04-25 17:09:35.243874] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.421 [2024-04-25 17:09:35.281629] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:05.421 00:06:05.421 Compression does not support the verify option, aborting. 00:06:05.421 17:09:35 -- common/autotest_common.sh@641 -- # es=161 00:06:05.421 17:09:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.421 17:09:35 -- common/autotest_common.sh@650 -- # es=33 00:06:05.421 17:09:35 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:05.421 17:09:35 -- common/autotest_common.sh@658 -- # es=1 00:06:05.421 17:09:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.421 00:06:05.421 real 0m0.370s 00:06:05.421 user 0m0.231s 00:06:05.421 sys 0m0.074s 00:06:05.421 17:09:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.421 ************************************ 00:06:05.421 END TEST accel_compress_verify 00:06:05.421 ************************************ 00:06:05.421 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.680 17:09:35 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:05.680 17:09:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.680 17:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.680 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.680 ************************************ 00:06:05.680 START TEST accel_wrong_workload 00:06:05.680 ************************************ 00:06:05.680 17:09:35 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:05.680 17:09:35 -- common/autotest_common.sh@638 -- # local es=0 00:06:05.680 17:09:35 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:05.680 17:09:35 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:05.680 17:09:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.680 17:09:35 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:05.680 17:09:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.680 17:09:35 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:05.680 17:09:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:05.680 17:09:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.680 17:09:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.680 17:09:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.680 17:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.680 17:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.680 17:09:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.680 17:09:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.680 17:09:35 -- accel/accel.sh@41 -- # jq -r . 
00:06:05.680 Unsupported workload type: foobar 00:06:05.680 [2024-04-25 17:09:35.505830] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:05.680 accel_perf options: 00:06:05.680 [-h help message] 00:06:05.680 [-q queue depth per core] 00:06:05.680 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.680 [-T number of threads per core 00:06:05.680 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.680 [-t time in seconds] 00:06:05.680 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.680 [ dif_verify, , dif_generate, dif_generate_copy 00:06:05.680 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.680 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.680 [-S for crc32c workload, use this seed value (default 0) 00:06:05.680 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.680 [-f for fill workload, use this BYTE value (default 255) 00:06:05.680 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.680 [-y verify result if this switch is on] 00:06:05.680 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.680 Can be used to spread operations across a wider range of memory. 00:06:05.680 17:09:35 -- common/autotest_common.sh@641 -- # es=1 00:06:05.680 17:09:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.680 17:09:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:05.680 17:09:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.680 00:06:05.680 real 0m0.036s 00:06:05.680 user 0m0.014s 00:06:05.680 sys 0m0.017s 00:06:05.680 ************************************ 00:06:05.680 END TEST accel_wrong_workload 00:06:05.680 ************************************ 00:06:05.680 17:09:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.680 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.680 17:09:35 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.680 17:09:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:05.680 17:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.680 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.680 ************************************ 00:06:05.680 START TEST accel_negative_buffers 00:06:05.680 ************************************ 00:06:05.680 17:09:35 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.680 17:09:35 -- common/autotest_common.sh@638 -- # local es=0 00:06:05.680 17:09:35 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:05.680 17:09:35 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:05.680 17:09:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.680 17:09:35 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:05.680 17:09:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.680 17:09:35 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:05.680 17:09:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:05.680 17:09:35 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:05.680 17:09:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.680 17:09:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.680 17:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.680 17:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.680 17:09:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.680 17:09:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.680 17:09:35 -- accel/accel.sh@41 -- # jq -r . 00:06:05.680 -x option must be non-negative. 00:06:05.680 [2024-04-25 17:09:35.652557] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:05.939 accel_perf options: 00:06:05.939 [-h help message] 00:06:05.939 [-q queue depth per core] 00:06:05.939 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.939 [-T number of threads per core 00:06:05.939 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.939 [-t time in seconds] 00:06:05.939 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.939 [ dif_verify, , dif_generate, dif_generate_copy 00:06:05.939 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.939 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.939 [-S for crc32c workload, use this seed value (default 0) 00:06:05.939 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.939 [-f for fill workload, use this BYTE value (default 255) 00:06:05.939 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.939 [-y verify result if this switch is on] 00:06:05.939 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.939 Can be used to spread operations across a wider range of memory. 
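The usage dump above is printed because '-x -1' is rejected while the arguments are parsed, before the app starts; per the same help text, the xor source-buffer count must be non-negative, with a default and minimum of 2. A sketch of the same command with an accepted value (2 should pass the argument check; actual execution still depends on the configured accel modules):

    # Rejected above: '-x -1'. The documented minimum of 2 passes the non-negative check:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2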
00:06:05.939 17:09:35 -- common/autotest_common.sh@641 -- # es=1 00:06:05.939 17:09:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.939 17:09:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:05.939 ************************************ 00:06:05.939 END TEST accel_negative_buffers 00:06:05.939 ************************************ 00:06:05.939 17:09:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.939 00:06:05.939 real 0m0.030s 00:06:05.939 user 0m0.016s 00:06:05.939 sys 0m0.014s 00:06:05.939 17:09:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.939 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.939 17:09:35 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:05.939 17:09:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:05.939 17:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.939 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.939 ************************************ 00:06:05.939 START TEST accel_crc32c 00:06:05.939 ************************************ 00:06:05.939 17:09:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:05.939 17:09:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.939 17:09:35 -- accel/accel.sh@17 -- # local accel_module 00:06:05.939 17:09:35 -- accel/accel.sh@19 -- # IFS=: 00:06:05.939 17:09:35 -- accel/accel.sh@19 -- # read -r var val 00:06:05.939 17:09:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:05.940 17:09:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:05.940 17:09:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.940 17:09:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.940 17:09:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.940 17:09:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.940 17:09:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.940 17:09:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.940 17:09:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.940 17:09:35 -- accel/accel.sh@41 -- # jq -r . 00:06:05.940 [2024-04-25 17:09:35.803084] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:05.940 [2024-04-25 17:09:35.803174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63660 ] 00:06:06.199 [2024-04-25 17:09:35.941543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.199 [2024-04-25 17:09:36.009903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val= 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val= 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val=0x1 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val= 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val= 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val=crc32c 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val=32 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val= 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val=software 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val=32 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val=32 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val=1 00:06:06.199 17:09:36 
-- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.199 17:09:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.199 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.199 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.200 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.200 17:09:36 -- accel/accel.sh@20 -- # val=Yes 00:06:06.200 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.200 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.200 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.200 17:09:36 -- accel/accel.sh@20 -- # val= 00:06:06.200 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.200 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.200 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:06.200 17:09:36 -- accel/accel.sh@20 -- # val= 00:06:06.200 17:09:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.200 17:09:36 -- accel/accel.sh@19 -- # IFS=: 00:06:06.200 17:09:36 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.579 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.579 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.579 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.579 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.579 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.579 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.579 ************************************ 00:06:07.579 END TEST accel_crc32c 00:06:07.579 ************************************ 00:06:07.579 17:09:37 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:07.579 17:09:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.579 00:06:07.579 real 0m1.406s 00:06:07.579 user 0m1.237s 00:06:07.579 sys 0m0.077s 00:06:07.579 17:09:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.579 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.579 17:09:37 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:07.579 17:09:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:07.579 17:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.579 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.579 ************************************ 00:06:07.579 START TEST accel_crc32c_C2 00:06:07.579 
************************************ 00:06:07.579 17:09:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:07.579 17:09:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.579 17:09:37 -- accel/accel.sh@17 -- # local accel_module 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.579 17:09:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:07.579 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.579 17:09:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:07.579 17:09:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.579 17:09:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.579 17:09:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.579 17:09:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.579 17:09:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.579 17:09:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.579 17:09:37 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.579 17:09:37 -- accel/accel.sh@41 -- # jq -r . 00:06:07.579 [2024-04-25 17:09:37.338549] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:07.579 [2024-04-25 17:09:37.338678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63704 ] 00:06:07.579 [2024-04-25 17:09:37.485361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.579 [2024-04-25 17:09:37.537091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=0x1 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=crc32c 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=0 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" 
in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=software 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=32 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=32 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=1 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val=Yes 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:07.838 17:09:37 -- accel/accel.sh@20 -- # val= 00:06:07.838 17:09:37 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # IFS=: 00:06:07.838 17:09:37 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 17:09:38 -- accel/accel.sh@20 -- # val= 00:06:08.773 17:09:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 17:09:38 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 17:09:38 -- accel/accel.sh@19 -- # read -r var val 00:06:08.773 17:09:38 -- accel/accel.sh@20 -- # val= 00:06:08.773 17:09:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.773 17:09:38 -- accel/accel.sh@19 -- # IFS=: 00:06:08.773 17:09:38 -- accel/accel.sh@19 -- # read -r var val 00:06:08.774 17:09:38 -- accel/accel.sh@20 -- # val= 00:06:08.774 17:09:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.774 17:09:38 -- accel/accel.sh@19 -- # IFS=: 00:06:08.774 17:09:38 -- accel/accel.sh@19 -- # read -r var val 00:06:08.774 ************************************ 00:06:08.774 END TEST accel_crc32c_C2 00:06:08.774 ************************************ 00:06:08.774 17:09:38 -- accel/accel.sh@20 -- # val= 00:06:08.774 17:09:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.774 17:09:38 -- accel/accel.sh@19 -- # IFS=: 00:06:08.774 17:09:38 -- accel/accel.sh@19 -- # read -r var val 00:06:08.774 17:09:38 -- accel/accel.sh@20 -- # val= 00:06:08.774 17:09:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.774 17:09:38 -- 
accel/accel.sh@19 -- # IFS=: 00:06:08.774 17:09:38 -- accel/accel.sh@19 -- # read -r var val 00:06:08.774 17:09:38 -- accel/accel.sh@20 -- # val= 00:06:08.774 17:09:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.774 17:09:38 -- accel/accel.sh@19 -- # IFS=: 00:06:08.774 17:09:38 -- accel/accel.sh@19 -- # read -r var val 00:06:08.774 17:09:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.774 17:09:38 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.774 17:09:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.774 00:06:08.774 real 0m1.387s 00:06:08.774 user 0m1.213s 00:06:08.774 sys 0m0.082s 00:06:08.774 17:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.774 17:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 17:09:38 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:08.774 17:09:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:08.774 17:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.774 17:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:09.032 ************************************ 00:06:09.032 START TEST accel_copy 00:06:09.032 ************************************ 00:06:09.032 17:09:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:09.032 17:09:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.032 17:09:38 -- accel/accel.sh@17 -- # local accel_module 00:06:09.032 17:09:38 -- accel/accel.sh@19 -- # IFS=: 00:06:09.032 17:09:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:09.032 17:09:38 -- accel/accel.sh@19 -- # read -r var val 00:06:09.032 17:09:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:09.032 17:09:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.032 17:09:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.032 17:09:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.032 17:09:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.032 17:09:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.032 17:09:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.032 17:09:38 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.032 17:09:38 -- accel/accel.sh@41 -- # jq -r . 00:06:09.032 [2024-04-25 17:09:38.839660] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:09.032 [2024-04-25 17:09:38.839942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63737 ] 00:06:09.032 [2024-04-25 17:09:38.975711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.291 [2024-04-25 17:09:39.027165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val= 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val= 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val=0x1 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val= 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val= 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val=copy 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val= 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val=software 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val=32 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val=32 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val=1 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.291 
17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val=Yes 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val= 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:09.291 17:09:39 -- accel/accel.sh@20 -- # val= 00:06:09.291 17:09:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # IFS=: 00:06:09.291 17:09:39 -- accel/accel.sh@19 -- # read -r var val 00:06:10.226 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.226 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.226 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.226 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.226 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.226 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.226 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.226 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.226 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.226 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.227 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.227 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.227 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.227 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.227 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.227 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.227 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.227 17:09:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.227 17:09:40 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:10.227 17:09:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.227 00:06:10.227 real 0m1.378s 00:06:10.227 user 0m1.216s 00:06:10.227 sys 0m0.071s 00:06:10.227 17:09:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.227 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.227 ************************************ 00:06:10.227 END TEST accel_copy 00:06:10.227 ************************************ 00:06:10.485 17:09:40 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.485 17:09:40 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:10.485 17:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.485 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.485 ************************************ 00:06:10.485 START TEST accel_fill 00:06:10.485 ************************************ 00:06:10.486 17:09:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.486 17:09:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.486 17:09:40 -- accel/accel.sh@17 -- # local 
accel_module 00:06:10.486 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.486 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.486 17:09:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.486 17:09:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.486 17:09:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.486 17:09:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.486 17:09:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.486 17:09:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.486 17:09:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.486 17:09:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.486 17:09:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.486 17:09:40 -- accel/accel.sh@41 -- # jq -r . 00:06:10.486 [2024-04-25 17:09:40.326583] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:10.486 [2024-04-25 17:09:40.326660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63782 ] 00:06:10.745 [2024-04-25 17:09:40.463034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.745 [2024-04-25 17:09:40.512332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=0x1 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=fill 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=0x80 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=software 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=64 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=64 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=1 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val=Yes 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:10.745 17:09:40 -- accel/accel.sh@20 -- # val= 00:06:10.745 17:09:40 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # IFS=: 00:06:10.745 17:09:40 -- accel/accel.sh@19 -- # read -r var val 00:06:11.705 17:09:41 -- accel/accel.sh@20 -- # val= 00:06:11.705 17:09:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.705 17:09:41 -- accel/accel.sh@20 -- # val= 00:06:11.705 17:09:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.705 17:09:41 -- accel/accel.sh@20 -- # val= 00:06:11.705 17:09:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.705 17:09:41 -- accel/accel.sh@20 -- # val= 00:06:11.705 17:09:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.705 17:09:41 -- accel/accel.sh@20 -- # val= 00:06:11.705 17:09:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.705 17:09:41 -- accel/accel.sh@20 -- # val= 00:06:11.705 17:09:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.705 17:09:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.705 17:09:41 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:06:11.705 17:09:41 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:11.705 17:09:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.705 00:06:11.705 real 0m1.366s 00:06:11.705 user 0m1.206s 00:06:11.705 sys 0m0.067s 00:06:11.705 17:09:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.705 ************************************ 00:06:11.705 END TEST accel_fill 00:06:11.705 ************************************ 00:06:11.705 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.964 17:09:41 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:11.964 17:09:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:11.964 17:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.964 17:09:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.964 ************************************ 00:06:11.964 START TEST accel_copy_crc32c 00:06:11.964 ************************************ 00:06:11.964 17:09:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:11.964 17:09:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.964 17:09:41 -- accel/accel.sh@17 -- # local accel_module 00:06:11.964 17:09:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:11.964 17:09:41 -- accel/accel.sh@19 -- # IFS=: 00:06:11.964 17:09:41 -- accel/accel.sh@19 -- # read -r var val 00:06:11.964 17:09:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:11.964 17:09:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.964 17:09:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.964 17:09:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.964 17:09:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.964 17:09:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.964 17:09:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.964 17:09:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.964 17:09:41 -- accel/accel.sh@41 -- # jq -r . 00:06:11.964 [2024-04-25 17:09:41.816319] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:11.964 [2024-04-25 17:09:41.816420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63815 ] 00:06:12.223 [2024-04-25 17:09:41.950626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.223 [2024-04-25 17:09:42.005033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.223 17:09:42 -- accel/accel.sh@20 -- # val= 00:06:12.223 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.223 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.223 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val= 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=0x1 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val= 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val= 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=0 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val= 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=software 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=32 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=32 
00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=1 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val=Yes 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val= 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:12.224 17:09:42 -- accel/accel.sh@20 -- # val= 00:06:12.224 17:09:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # IFS=: 00:06:12.224 17:09:42 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.601 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.601 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.601 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.601 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.601 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.601 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.601 17:09:43 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:13.601 17:09:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.601 00:06:13.601 real 0m1.369s 00:06:13.601 user 0m1.204s 00:06:13.601 sys 0m0.074s 00:06:13.601 17:09:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.601 17:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.601 ************************************ 00:06:13.601 END TEST accel_copy_crc32c 00:06:13.601 ************************************ 00:06:13.601 17:09:43 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.601 17:09:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:06:13.601 17:09:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.601 17:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.601 ************************************ 00:06:13.601 START TEST accel_copy_crc32c_C2 00:06:13.601 ************************************ 00:06:13.601 17:09:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.601 17:09:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.601 17:09:43 -- accel/accel.sh@17 -- # local accel_module 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.601 17:09:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:13.601 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.601 17:09:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:13.601 17:09:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.601 17:09:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.601 17:09:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.601 17:09:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.601 17:09:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.602 17:09:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.602 17:09:43 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.602 17:09:43 -- accel/accel.sh@41 -- # jq -r . 00:06:13.602 [2024-04-25 17:09:43.303901] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:13.602 [2024-04-25 17:09:43.303983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63854 ] 00:06:13.602 [2024-04-25 17:09:43.438916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.602 [2024-04-25 17:09:43.490752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=0x1 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=0 00:06:13.602 17:09:43 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=software 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=32 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=32 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=1 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val=Yes 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:13.602 17:09:43 -- accel/accel.sh@20 -- # val= 00:06:13.602 17:09:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # IFS=: 00:06:13.602 17:09:43 -- accel/accel.sh@19 -- # read -r var val 00:06:14.978 17:09:44 -- accel/accel.sh@20 -- # val= 00:06:14.978 17:09:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.978 17:09:44 -- accel/accel.sh@20 -- # val= 00:06:14.978 17:09:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.978 17:09:44 -- accel/accel.sh@20 -- # val= 00:06:14.978 17:09:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # read -r var val 
00:06:14.978 17:09:44 -- accel/accel.sh@20 -- # val= 00:06:14.978 17:09:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.978 17:09:44 -- accel/accel.sh@20 -- # val= 00:06:14.978 17:09:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.978 17:09:44 -- accel/accel.sh@20 -- # val= 00:06:14.978 17:09:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.978 17:09:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.978 17:09:44 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:14.978 17:09:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.978 00:06:14.978 real 0m1.372s 00:06:14.978 user 0m1.206s 00:06:14.978 sys 0m0.075s 00:06:14.978 17:09:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.978 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.978 ************************************ 00:06:14.978 END TEST accel_copy_crc32c_C2 00:06:14.978 ************************************ 00:06:14.978 17:09:44 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:14.978 17:09:44 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:14.978 17:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.978 17:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.978 ************************************ 00:06:14.978 START TEST accel_dualcast 00:06:14.978 ************************************ 00:06:14.978 17:09:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:14.978 17:09:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.978 17:09:44 -- accel/accel.sh@17 -- # local accel_module 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # IFS=: 00:06:14.978 17:09:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:14.978 17:09:44 -- accel/accel.sh@19 -- # read -r var val 00:06:14.978 17:09:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.978 17:09:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.978 17:09:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.978 17:09:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.978 17:09:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.978 17:09:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.978 17:09:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.978 17:09:44 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.978 17:09:44 -- accel/accel.sh@41 -- # jq -r . 00:06:14.978 [2024-04-25 17:09:44.784157] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:14.978 [2024-04-25 17:09:44.784234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63892 ] 00:06:14.978 [2024-04-25 17:09:44.920149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.236 [2024-04-25 17:09:44.974908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val= 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val= 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val=0x1 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val= 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val= 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val=dualcast 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val= 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val=software 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@22 -- # accel_module=software 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val=32 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val=32 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val=1 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val='1 seconds' 
00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.236 17:09:45 -- accel/accel.sh@20 -- # val=Yes 00:06:15.236 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.236 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.237 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.237 17:09:45 -- accel/accel.sh@20 -- # val= 00:06:15.237 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.237 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.237 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:15.237 17:09:45 -- accel/accel.sh@20 -- # val= 00:06:15.237 17:09:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.237 17:09:45 -- accel/accel.sh@19 -- # IFS=: 00:06:15.237 17:09:45 -- accel/accel.sh@19 -- # read -r var val 00:06:16.170 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.170 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.170 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.170 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.170 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.170 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.170 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.170 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.170 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.170 ************************************ 00:06:16.170 END TEST accel_dualcast 00:06:16.170 ************************************ 00:06:16.170 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.170 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.170 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.170 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.170 17:09:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.170 17:09:46 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:16.170 17:09:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.170 00:06:16.170 real 0m1.375s 00:06:16.170 user 0m1.216s 00:06:16.170 sys 0m0.067s 00:06:16.170 17:09:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.170 17:09:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.448 17:09:46 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:16.448 17:09:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:16.448 17:09:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.448 17:09:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.448 ************************************ 00:06:16.448 START TEST accel_compare 00:06:16.448 ************************************ 00:06:16.448 17:09:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:16.448 17:09:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.448 17:09:46 -- accel/accel.sh@17 -- # local 
accel_module 00:06:16.448 17:09:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:16.448 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.448 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.448 17:09:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:16.448 17:09:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.448 17:09:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.448 17:09:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.448 17:09:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.448 17:09:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.448 17:09:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.448 17:09:46 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.448 17:09:46 -- accel/accel.sh@41 -- # jq -r . 00:06:16.448 [2024-04-25 17:09:46.280930] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:16.448 [2024-04-25 17:09:46.281042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63931 ] 00:06:16.448 [2024-04-25 17:09:46.415449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.706 [2024-04-25 17:09:46.466421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.706 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.706 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.706 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.706 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.706 17:09:46 -- accel/accel.sh@20 -- # val=0x1 00:06:16.706 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.706 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.706 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.706 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.706 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.706 17:09:46 -- accel/accel.sh@20 -- # val=compare 00:06:16.706 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.706 17:09:46 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.706 17:09:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.706 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.706 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val=software 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 
00:06:16.707 17:09:46 -- accel/accel.sh@22 -- # accel_module=software 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val=32 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val=32 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val=1 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val=Yes 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:16.707 17:09:46 -- accel/accel.sh@20 -- # val= 00:06:16.707 17:09:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # IFS=: 00:06:16.707 17:09:46 -- accel/accel.sh@19 -- # read -r var val 00:06:17.710 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:17.710 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.710 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.710 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.710 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:17.710 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.710 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.710 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.710 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:17.711 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.711 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:17.711 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.711 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:17.711 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.711 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:17.711 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.711 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.711 ************************************ 00:06:17.711 END TEST accel_compare 00:06:17.711 ************************************ 00:06:17.711 17:09:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.711 17:09:47 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:17.711 17:09:47 -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.711 00:06:17.711 real 0m1.384s 00:06:17.711 user 0m1.219s 00:06:17.711 sys 0m0.070s 00:06:17.711 17:09:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.711 17:09:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.711 17:09:47 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:17.711 17:09:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:17.711 17:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.711 17:09:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.970 ************************************ 00:06:17.970 START TEST accel_xor 00:06:17.970 ************************************ 00:06:17.970 17:09:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:17.970 17:09:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.970 17:09:47 -- accel/accel.sh@17 -- # local accel_module 00:06:17.970 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:17.970 17:09:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:17.970 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:17.970 17:09:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:17.970 17:09:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.970 17:09:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.970 17:09:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.970 17:09:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.970 17:09:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.970 17:09:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.970 17:09:47 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.970 17:09:47 -- accel/accel.sh@41 -- # jq -r . 00:06:17.970 [2024-04-25 17:09:47.777078] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:17.970 [2024-04-25 17:09:47.777202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63969 ] 00:06:17.970 [2024-04-25 17:09:47.912219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.228 [2024-04-25 17:09:47.962684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.228 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:18.228 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.228 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:18.228 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:18.228 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:18.228 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.228 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:18.228 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:47 -- accel/accel.sh@20 -- # val=0x1 00:06:18.229 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:18.229 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:47 -- accel/accel.sh@20 -- # val= 00:06:18.229 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:47 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:47 -- accel/accel.sh@20 -- # val=xor 00:06:18.229 17:09:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:47 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:18.229 17:09:47 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val=2 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val= 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val=software 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val=32 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val=32 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val=1 00:06:18.229 17:09:48 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val=Yes 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val= 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:18.229 17:09:48 -- accel/accel.sh@20 -- # val= 00:06:18.229 17:09:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # IFS=: 00:06:18.229 17:09:48 -- accel/accel.sh@19 -- # read -r var val 00:06:19.165 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.165 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.165 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.165 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.165 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.165 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.165 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.165 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.165 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.165 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.165 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.165 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.165 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.165 17:09:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.165 17:09:49 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:19.165 17:09:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.165 00:06:19.165 real 0m1.383s 00:06:19.165 user 0m1.221s 00:06:19.165 sys 0m0.071s 00:06:19.165 17:09:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.166 ************************************ 00:06:19.166 END TEST accel_xor 00:06:19.166 ************************************ 00:06:19.166 17:09:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.424 17:09:49 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:19.424 17:09:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:19.424 17:09:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.424 17:09:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.424 ************************************ 00:06:19.424 START TEST accel_xor 00:06:19.424 ************************************ 00:06:19.424 
17:09:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:19.424 17:09:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.424 17:09:49 -- accel/accel.sh@17 -- # local accel_module 00:06:19.424 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.424 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.424 17:09:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:19.424 17:09:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:19.424 17:09:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.424 17:09:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.424 17:09:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.424 17:09:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.424 17:09:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.424 17:09:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.424 17:09:49 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.424 17:09:49 -- accel/accel.sh@41 -- # jq -r . 00:06:19.424 [2024-04-25 17:09:49.283983] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:19.424 [2024-04-25 17:09:49.284111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64009 ] 00:06:19.684 [2024-04-25 17:09:49.420184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.684 [2024-04-25 17:09:49.468023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=0x1 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=xor 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=3 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 
00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=software 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=32 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=32 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=1 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val=Yes 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:19.684 17:09:49 -- accel/accel.sh@20 -- # val= 00:06:19.684 17:09:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # IFS=: 00:06:19.684 17:09:49 -- accel/accel.sh@19 -- # read -r var val 00:06:21.074 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.074 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.074 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.074 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.074 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.074 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.074 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.074 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.074 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.074 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.074 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.074 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.074 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.074 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 
00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.075 17:09:50 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:21.075 17:09:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.075 00:06:21.075 real 0m1.371s 00:06:21.075 user 0m1.214s 00:06:21.075 sys 0m0.065s 00:06:21.075 17:09:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.075 ************************************ 00:06:21.075 END TEST accel_xor 00:06:21.075 ************************************ 00:06:21.075 17:09:50 -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 17:09:50 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:21.075 17:09:50 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:21.075 17:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.075 17:09:50 -- common/autotest_common.sh@10 -- # set +x 00:06:21.075 ************************************ 00:06:21.075 START TEST accel_dif_verify 00:06:21.075 ************************************ 00:06:21.075 17:09:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:21.075 17:09:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.075 17:09:50 -- accel/accel.sh@17 -- # local accel_module 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:21.075 17:09:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.075 17:09:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.075 17:09:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.075 17:09:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.075 17:09:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.075 17:09:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.075 17:09:50 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.075 17:09:50 -- accel/accel.sh@41 -- # jq -r . 00:06:21.075 [2024-04-25 17:09:50.764176] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:21.075 [2024-04-25 17:09:50.764309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64048 ] 00:06:21.075 [2024-04-25 17:09:50.900684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.075 [2024-04-25 17:09:50.948553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val=0x1 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val=dif_verify 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val=software 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 
-- # val=32 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val=32 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val=1 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val=No 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:21.075 17:09:50 -- accel/accel.sh@20 -- # val= 00:06:21.075 17:09:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # IFS=: 00:06:21.075 17:09:50 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.453 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.453 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.453 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.453 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.453 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.453 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.453 17:09:52 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:22.453 17:09:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.453 00:06:22.453 real 0m1.371s 00:06:22.453 user 0m1.217s 00:06:22.453 sys 0m0.064s 00:06:22.453 17:09:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.453 ************************************ 00:06:22.453 END TEST accel_dif_verify 00:06:22.453 ************************************ 00:06:22.453 
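The accel_dif_verify run above completed on the software path in roughly 1.37 s of wall time. Per the trace, its accel_perf invocation was -t 1 -w dif_verify over what appear to be 4096-byte buffers with a 512-byte block size and 8 bytes of DIF metadata. A minimal reproduction along the same lines (the -c /dev/fd/62 config plumbing omitted, which is an assumption) would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

The accel_dif_generate and accel_dif_generate_copy tests that follow exercise the same buffer geometry with -w dif_generate and -w dif_generate_copy.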
17:09:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.453 17:09:52 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:22.453 17:09:52 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:22.453 17:09:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.453 17:09:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.453 ************************************ 00:06:22.453 START TEST accel_dif_generate 00:06:22.453 ************************************ 00:06:22.453 17:09:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:22.453 17:09:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.453 17:09:52 -- accel/accel.sh@17 -- # local accel_module 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.453 17:09:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:22.453 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.453 17:09:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:22.453 17:09:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.453 17:09:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.453 17:09:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.453 17:09:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.453 17:09:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.453 17:09:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.453 17:09:52 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.453 17:09:52 -- accel/accel.sh@41 -- # jq -r . 00:06:22.453 [2024-04-25 17:09:52.245775] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:22.453 [2024-04-25 17:09:52.245877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64087 ] 00:06:22.453 [2024-04-25 17:09:52.381288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.453 [2024-04-25 17:09:52.429741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val=0x1 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val=dif_generate 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val=software 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val=32 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val=32 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val=1 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val=No 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:22.712 17:09:52 -- accel/accel.sh@20 -- # val= 00:06:22.712 17:09:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # IFS=: 00:06:22.712 17:09:52 -- accel/accel.sh@19 -- # read -r var val 00:06:23.648 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:23.648 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # read -r var 
val 00:06:23.648 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:23.648 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.648 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:23.648 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.648 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:23.648 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.648 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:23.648 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.648 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:23.648 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.648 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.648 17:09:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.648 17:09:53 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:23.648 ************************************ 00:06:23.648 END TEST accel_dif_generate 00:06:23.648 ************************************ 00:06:23.648 17:09:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.648 00:06:23.648 real 0m1.362s 00:06:23.648 user 0m1.200s 00:06:23.648 sys 0m0.073s 00:06:23.648 17:09:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.648 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:06:23.908 17:09:53 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:23.908 17:09:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:23.908 17:09:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.908 17:09:53 -- common/autotest_common.sh@10 -- # set +x 00:06:23.908 ************************************ 00:06:23.908 START TEST accel_dif_generate_copy 00:06:23.908 ************************************ 00:06:23.908 17:09:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:23.908 17:09:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.908 17:09:53 -- accel/accel.sh@17 -- # local accel_module 00:06:23.908 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:23.908 17:09:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:23.908 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:23.908 17:09:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:23.908 17:09:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.908 17:09:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.908 17:09:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.908 17:09:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.908 17:09:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.908 17:09:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.908 17:09:53 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.908 17:09:53 -- accel/accel.sh@41 -- # jq -r . 00:06:23.908 [2024-04-25 17:09:53.731136] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:23.908 [2024-04-25 17:09:53.731212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64120 ] 00:06:23.908 [2024-04-25 17:09:53.864346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.168 [2024-04-25 17:09:53.914673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val=0x1 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val=software 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val=32 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val=32 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 
-- # val=1 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val=No 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:24.168 17:09:53 -- accel/accel.sh@20 -- # val= 00:06:24.168 17:09:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # IFS=: 00:06:24.168 17:09:53 -- accel/accel.sh@19 -- # read -r var val 00:06:25.104 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.104 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.104 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.104 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.104 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.104 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.104 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.104 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.104 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.104 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.104 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.104 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.104 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.104 17:09:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.104 17:09:55 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:25.104 17:09:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.104 00:06:25.104 real 0m1.364s 00:06:25.104 user 0m1.207s 00:06:25.104 sys 0m0.067s 00:06:25.104 17:09:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.104 17:09:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.104 ************************************ 00:06:25.104 END TEST accel_dif_generate_copy 00:06:25.104 ************************************ 00:06:25.364 17:09:55 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:25.364 17:09:55 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.364 17:09:55 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:25.364 17:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.364 17:09:55 -- 
common/autotest_common.sh@10 -- # set +x 00:06:25.364 ************************************ 00:06:25.364 START TEST accel_comp 00:06:25.364 ************************************ 00:06:25.364 17:09:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.364 17:09:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.364 17:09:55 -- accel/accel.sh@17 -- # local accel_module 00:06:25.364 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.364 17:09:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.364 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.364 17:09:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.364 17:09:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.364 17:09:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.364 17:09:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.364 17:09:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.364 17:09:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.364 17:09:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.364 17:09:55 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.364 17:09:55 -- accel/accel.sh@41 -- # jq -r . 00:06:25.364 [2024-04-25 17:09:55.218042] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:25.364 [2024-04-25 17:09:55.218189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64158 ] 00:06:25.623 [2024-04-25 17:09:55.357504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.623 [2024-04-25 17:09:55.407668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=0x1 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=compress 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@23 
-- # accel_opc=compress 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=software 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=32 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=32 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=1 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val=No 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:25.623 17:09:55 -- accel/accel.sh@20 -- # val= 00:06:25.623 17:09:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # IFS=: 00:06:25.623 17:09:55 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # 
read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.000 17:09:56 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:27.000 17:09:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.000 00:06:27.000 real 0m1.382s 00:06:27.000 user 0m1.220s 00:06:27.000 sys 0m0.073s 00:06:27.000 17:09:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.000 ************************************ 00:06:27.000 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:27.000 END TEST accel_comp 00:06:27.000 ************************************ 00:06:27.000 17:09:56 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.000 17:09:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:27.000 17:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.000 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:27.000 ************************************ 00:06:27.000 START TEST accel_decomp 00:06:27.000 ************************************ 00:06:27.000 17:09:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.000 17:09:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.000 17:09:56 -- accel/accel.sh@17 -- # local accel_module 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.000 17:09:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:27.000 17:09:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.000 17:09:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.000 17:09:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.000 17:09:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.000 17:09:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.000 17:09:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.000 17:09:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.000 17:09:56 -- accel/accel.sh@41 -- # jq -r . 00:06:27.000 [2024-04-25 17:09:56.719507] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
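accel_comp above compressed the bundled bib test file on the software path (-t 1 -w compress -l), and accel_decomp now decompresses the same file with verification enabled (-y). Based on the commands in the trace, both stages can presumably be run by hand as sketched below, again minus the -c /dev/fd/62 JSON config the harness supplies, which may or may not be required:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y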
00:06:27.000 [2024-04-25 17:09:56.719613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64198 ] 00:06:27.000 [2024-04-25 17:09:56.856865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.000 [2024-04-25 17:09:56.905324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val=0x1 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val=decompress 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val=software 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val=32 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- 
accel/accel.sh@20 -- # val=32 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val=1 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val=Yes 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:27.000 17:09:56 -- accel/accel.sh@20 -- # val= 00:06:27.000 17:09:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # IFS=: 00:06:27.000 17:09:56 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.378 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.378 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.378 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.378 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.378 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.378 ************************************ 00:06:28.378 END TEST accel_decomp 00:06:28.378 ************************************ 00:06:28.378 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.378 17:09:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.378 17:09:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.378 00:06:28.378 real 0m1.381s 00:06:28.378 user 0m1.224s 00:06:28.378 sys 0m0.066s 00:06:28.378 17:09:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.378 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.378 17:09:58 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
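accel_decmop_full (spelled "decmop" as in the test script) repeats the decompress workload with -o 0; in the trace below the data size switches from the 4096-byte buffers of the earlier runs to a 111250-byte value, presumably sized to the bib test data rather than a fixed 4 KiB block. A comparable manual run, with the same caveat about the omitted -c /dev/fd/62 config, would look like:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0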
00:06:28.378 17:09:58 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:28.378 17:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.378 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.378 ************************************ 00:06:28.378 START TEST accel_decmop_full 00:06:28.378 ************************************ 00:06:28.378 17:09:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:28.378 17:09:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.378 17:09:58 -- accel/accel.sh@17 -- # local accel_module 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.378 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.378 17:09:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:28.378 17:09:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:28.378 17:09:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.378 17:09:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.378 17:09:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.378 17:09:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.378 17:09:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.378 17:09:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.378 17:09:58 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.378 17:09:58 -- accel/accel.sh@41 -- # jq -r . 00:06:28.378 [2024-04-25 17:09:58.215122] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:28.378 [2024-04-25 17:09:58.215194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64236 ] 00:06:28.378 [2024-04-25 17:09:58.351658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.637 [2024-04-25 17:09:58.401233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val=0x1 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 
17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val=decompress 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.637 17:09:58 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:28.637 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.637 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val=software 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val=32 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val=32 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val=1 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val=Yes 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:28.638 17:09:58 -- accel/accel.sh@20 -- # val= 00:06:28.638 17:09:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # IFS=: 00:06:28.638 17:09:58 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r 
var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.015 17:09:59 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.015 17:09:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.015 00:06:30.015 real 0m1.384s 00:06:30.015 user 0m1.219s 00:06:30.015 sys 0m0.075s 00:06:30.015 ************************************ 00:06:30.015 END TEST accel_decmop_full 00:06:30.015 ************************************ 00:06:30.015 17:09:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.015 17:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.015 17:09:59 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:30.015 17:09:59 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:30.015 17:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.015 17:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.015 ************************************ 00:06:30.015 START TEST accel_decomp_mcore 00:06:30.015 ************************************ 00:06:30.015 17:09:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:30.015 17:09:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.015 17:09:59 -- accel/accel.sh@17 -- # local accel_module 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:30.015 17:09:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:30.015 17:09:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.015 17:09:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.015 17:09:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.015 17:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.015 17:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.015 17:09:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.015 17:09:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.015 17:09:59 -- accel/accel.sh@41 -- # jq -r . 00:06:30.015 [2024-04-25 17:09:59.714593] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:30.015 [2024-04-25 17:09:59.714691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64275 ] 00:06:30.015 [2024-04-25 17:09:59.843589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.015 [2024-04-25 17:09:59.893651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.015 [2024-04-25 17:09:59.893853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.015 [2024-04-25 17:09:59.893857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.015 [2024-04-25 17:09:59.893780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=0xf 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=decompress 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=software 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 
00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=32 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=32 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=1 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.015 17:09:59 -- accel/accel.sh@20 -- # val=Yes 00:06:30.015 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.015 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.016 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.016 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.016 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.016 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:30.016 17:09:59 -- accel/accel.sh@20 -- # val= 00:06:30.016 17:09:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.016 17:09:59 -- accel/accel.sh@19 -- # IFS=: 00:06:30.016 17:09:59 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- 
accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.394 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.394 17:10:01 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.394 17:10:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.394 ************************************ 00:06:31.394 END TEST accel_decomp_mcore 00:06:31.394 ************************************ 00:06:31.394 00:06:31.394 real 0m1.374s 00:06:31.394 user 0m4.400s 00:06:31.394 sys 0m0.093s 00:06:31.394 17:10:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.394 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:06:31.394 17:10:01 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.394 17:10:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:31.394 17:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.394 17:10:01 -- common/autotest_common.sh@10 -- # set +x 00:06:31.394 ************************************ 00:06:31.394 START TEST accel_decomp_full_mcore 00:06:31.394 ************************************ 00:06:31.394 17:10:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.394 17:10:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.394 17:10:01 -- accel/accel.sh@17 -- # local accel_module 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 17:10:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.394 17:10:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:31.394 17:10:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.394 17:10:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.394 17:10:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.394 17:10:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.394 17:10:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.394 17:10:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.394 17:10:01 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.394 17:10:01 -- accel/accel.sh@41 -- # jq -r . 00:06:31.394 [2024-04-25 17:10:01.197617] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:31.394 [2024-04-25 17:10:01.197705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64312 ] 00:06:31.394 [2024-04-25 17:10:01.330414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.653 [2024-04-25 17:10:01.386596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.653 [2024-04-25 17:10:01.386695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.653 [2024-04-25 17:10:01.386812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.653 [2024-04-25 17:10:01.386813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.653 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.653 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.653 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.653 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.653 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.653 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.653 17:10:01 -- accel/accel.sh@20 -- # val=0xf 00:06:31.653 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.653 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.653 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.653 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.653 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val=decompress 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val=software 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 
00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val=32 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val=32 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val=1 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val=Yes 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:31.654 17:10:01 -- accel/accel.sh@20 -- # val= 00:06:31.654 17:10:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # IFS=: 00:06:31.654 17:10:01 -- accel/accel.sh@19 -- # read -r var val 00:06:32.589 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- 
accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:32.848 ************************************ 00:06:32.848 END TEST accel_decomp_full_mcore 00:06:32.848 ************************************ 00:06:32.848 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.848 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.848 17:10:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.848 17:10:02 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.848 17:10:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.849 00:06:32.849 real 0m1.399s 00:06:32.849 user 0m4.455s 00:06:32.849 sys 0m0.085s 00:06:32.849 17:10:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.849 17:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.849 17:10:02 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.849 17:10:02 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:32.849 17:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.849 17:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.849 ************************************ 00:06:32.849 START TEST accel_decomp_mthread 00:06:32.849 ************************************ 00:06:32.849 17:10:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.849 17:10:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.849 17:10:02 -- accel/accel.sh@17 -- # local accel_module 00:06:32.849 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:32.849 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:32.849 17:10:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.849 17:10:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:32.849 17:10:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.849 17:10:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.849 17:10:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.849 17:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.849 17:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.849 17:10:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.849 17:10:02 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.849 17:10:02 -- accel/accel.sh@41 -- # jq -r . 00:06:32.849 [2024-04-25 17:10:02.717880] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:32.849 [2024-04-25 17:10:02.717981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64358 ] 00:06:33.108 [2024-04-25 17:10:02.854330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.108 [2024-04-25 17:10:02.906241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val=0x1 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val=decompress 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val=software 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val=32 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- 
accel/accel.sh@20 -- # val=32 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val=2 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val=Yes 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:33.108 17:10:02 -- accel/accel.sh@20 -- # val= 00:06:33.108 17:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # IFS=: 00:06:33.108 17:10:02 -- accel/accel.sh@19 -- # read -r var val 00:06:34.484 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.484 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.485 17:10:04 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.485 17:10:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.485 00:06:34.485 real 0m1.386s 00:06:34.485 user 0m1.217s 00:06:34.485 sys 0m0.071s 00:06:34.485 17:10:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.485 ************************************ 00:06:34.485 END TEST accel_decomp_mthread 00:06:34.485 
************************************ 00:06:34.485 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.485 17:10:04 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.485 17:10:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:34.485 17:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.485 17:10:04 -- common/autotest_common.sh@10 -- # set +x 00:06:34.485 ************************************ 00:06:34.485 START TEST accel_deomp_full_mthread 00:06:34.485 ************************************ 00:06:34.485 17:10:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.485 17:10:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.485 17:10:04 -- accel/accel.sh@17 -- # local accel_module 00:06:34.485 17:10:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.485 17:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.485 17:10:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.485 17:10:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.485 17:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.485 17:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.485 17:10:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.485 17:10:04 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.485 17:10:04 -- accel/accel.sh@41 -- # jq -r . 00:06:34.485 [2024-04-25 17:10:04.229746] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:06:34.485 [2024-04-25 17:10:04.229831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64391 ] 00:06:34.485 [2024-04-25 17:10:04.364562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.485 [2024-04-25 17:10:04.414787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val=0x1 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val=decompress 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.485 17:10:04 -- accel/accel.sh@20 -- # val=software 00:06:34.485 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.485 17:10:04 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.485 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- accel/accel.sh@20 -- # val=32 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- 
accel/accel.sh@20 -- # val=32 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- accel/accel.sh@20 -- # val=2 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- accel/accel.sh@20 -- # val=Yes 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:34.744 17:10:04 -- accel/accel.sh@20 -- # val= 00:06:34.744 17:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # IFS=: 00:06:34.744 17:10:04 -- accel/accel.sh@19 -- # read -r var val 00:06:35.678 17:10:05 -- accel/accel.sh@20 -- # val= 00:06:35.679 17:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.679 17:10:05 -- accel/accel.sh@20 -- # val= 00:06:35.679 17:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.679 17:10:05 -- accel/accel.sh@20 -- # val= 00:06:35.679 17:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.679 17:10:05 -- accel/accel.sh@20 -- # val= 00:06:35.679 17:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.679 17:10:05 -- accel/accel.sh@20 -- # val= 00:06:35.679 17:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.679 17:10:05 -- accel/accel.sh@20 -- # val= 00:06:35.679 17:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.679 17:10:05 -- accel/accel.sh@20 -- # val= 00:06:35.679 17:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # IFS=: 00:06:35.679 17:10:05 -- accel/accel.sh@19 -- # read -r var val 00:06:35.679 17:10:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.679 17:10:05 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.679 17:10:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.679 ************************************ 00:06:35.679 END TEST accel_deomp_full_mthread 00:06:35.679 ************************************ 00:06:35.679 00:06:35.679 real 0m1.416s 00:06:35.679 user 0m1.241s 00:06:35.679 sys 0m0.077s 00:06:35.679 17:10:05 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:06:35.679 17:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.937 17:10:05 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:35.937 17:10:05 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:35.937 17:10:05 -- accel/accel.sh@137 -- # build_accel_config 00:06:35.937 17:10:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:35.937 17:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.937 17:10:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.937 17:10:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.937 17:10:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.937 17:10:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.937 17:10:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.937 17:10:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.937 17:10:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.938 17:10:05 -- accel/accel.sh@41 -- # jq -r . 00:06:35.938 ************************************ 00:06:35.938 START TEST accel_dif_functional_tests 00:06:35.938 ************************************ 00:06:35.938 17:10:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:35.938 [2024-04-25 17:10:05.796316] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:35.938 [2024-04-25 17:10:05.796417] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64440 ] 00:06:36.196 [2024-04-25 17:10:05.933816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.196 [2024-04-25 17:10:05.983592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.196 [2024-04-25 17:10:05.983740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.196 [2024-04-25 17:10:05.983741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.196 00:06:36.196 00:06:36.196 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.196 http://cunit.sourceforge.net/ 00:06:36.196 00:06:36.196 00:06:36.196 Suite: accel_dif 00:06:36.196 Test: verify: DIF generated, GUARD check ...passed 00:06:36.196 Test: verify: DIF generated, APPTAG check ...passed 00:06:36.196 Test: verify: DIF generated, REFTAG check ...passed 00:06:36.196 Test: verify: DIF not generated, GUARD check ...passed 00:06:36.196 Test: verify: DIF not generated, APPTAG check ...[2024-04-25 17:10:06.031845] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:36.196 [2024-04-25 17:10:06.031951] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:36.196 [2024-04-25 17:10:06.031991] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:36.196 passed 00:06:36.196 Test: verify: DIF not generated, REFTAG check ...passed 00:06:36.196 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:36.196 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:36.196 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:36.196 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-04-25 17:10:06.032017] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:36.196 [2024-04-25 17:10:06.032043] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:36.196 [2024-04-25 17:10:06.032069] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:36.196 [2024-04-25 17:10:06.032237] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:36.196 passed 00:06:36.196 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:36.196 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:36.196 Test: generate copy: DIF generated, GUARD check ...[2024-04-25 17:10:06.032465] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:36.196 passed 00:06:36.196 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:36.196 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:36.196 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:36.196 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:36.196 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:36.196 Test: generate copy: iovecs-len validate ...[2024-04-25 17:10:06.033597] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:36.196 passed 00:06:36.196 Test: generate copy: buffer alignment validate ...passed 00:06:36.196 00:06:36.196 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.196 suites 1 1 n/a 0 0 00:06:36.196 tests 20 20 20 0 0 00:06:36.196 asserts 204 204 204 0 n/a 00:06:36.196 00:06:36.196 Elapsed time = 0.007 seconds 00:06:36.455 ************************************ 00:06:36.455 END TEST accel_dif_functional_tests 00:06:36.455 ************************************ 00:06:36.455 00:06:36.455 real 0m0.453s 00:06:36.455 user 0m0.467s 00:06:36.455 sys 0m0.098s 00:06:36.455 17:10:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.455 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.455 00:06:36.455 real 0m32.820s 00:06:36.455 user 0m33.836s 00:06:36.455 sys 0m3.543s 00:06:36.455 ************************************ 00:06:36.455 END TEST accel 00:06:36.455 ************************************ 00:06:36.455 17:10:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.455 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.455 17:10:06 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:36.455 17:10:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.455 17:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.455 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.455 ************************************ 00:06:36.455 START TEST accel_rpc 00:06:36.455 ************************************ 00:06:36.455 17:10:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:36.714 * Looking for test storage... 00:06:36.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:36.715 17:10:06 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.715 17:10:06 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64509 00:06:36.715 17:10:06 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:36.715 17:10:06 -- accel/accel_rpc.sh@15 -- # waitforlisten 64509 00:06:36.715 17:10:06 -- common/autotest_common.sh@817 -- # '[' -z 64509 ']' 00:06:36.715 17:10:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.715 17:10:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.715 17:10:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.715 17:10:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.715 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.715 [2024-04-25 17:10:06.512045] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:36.715 [2024-04-25 17:10:06.512145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64509 ] 00:06:36.715 [2024-04-25 17:10:06.641329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.973 [2024-04-25 17:10:06.697418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.973 17:10:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.973 17:10:06 -- common/autotest_common.sh@850 -- # return 0 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:36.973 17:10:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.973 17:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.973 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.973 ************************************ 00:06:36.973 START TEST accel_assign_opcode 00:06:36.973 ************************************ 00:06:36.973 17:10:06 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:36.973 17:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.973 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.973 [2024-04-25 17:10:06.837929] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:36.973 17:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:36.973 17:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.973 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.973 [2024-04-25 17:10:06.849896] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:36.973 17:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.973 17:10:06 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:36.973 17:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.973 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:37.232 17:10:06 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.232 17:10:06 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:37.232 17:10:06 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:37.232 17:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:37.232 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:37.232 17:10:06 -- accel/accel_rpc.sh@42 -- # grep software 00:06:37.232 17:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.232 software 00:06:37.232 ************************************ 00:06:37.232 END TEST accel_assign_opcode 00:06:37.232 ************************************ 00:06:37.232 00:06:37.232 real 0m0.193s 00:06:37.232 user 0m0.046s 00:06:37.232 sys 0m0.009s 00:06:37.232 17:10:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.232 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:06:37.232 17:10:07 -- accel/accel_rpc.sh@55 -- # killprocess 64509 00:06:37.232 17:10:07 -- common/autotest_common.sh@936 -- # '[' -z 64509 ']' 00:06:37.232 17:10:07 -- common/autotest_common.sh@940 -- # kill -0 64509 00:06:37.232 17:10:07 -- common/autotest_common.sh@941 -- # uname 00:06:37.232 17:10:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.232 17:10:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64509 00:06:37.232 killing process with pid 64509 00:06:37.232 17:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.232 17:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.232 17:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64509' 00:06:37.232 17:10:07 -- common/autotest_common.sh@955 -- # kill 64509 00:06:37.232 17:10:07 -- common/autotest_common.sh@960 -- # wait 64509 00:06:37.491 00:06:37.491 real 0m0.998s 00:06:37.491 user 0m1.036s 00:06:37.491 sys 0m0.324s 00:06:37.491 ************************************ 00:06:37.491 END TEST accel_rpc 00:06:37.491 ************************************ 00:06:37.491 17:10:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.491 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:06:37.491 17:10:07 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:37.491 17:10:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.491 17:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.491 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:06:37.750 ************************************ 00:06:37.750 START TEST app_cmdline 00:06:37.750 ************************************ 00:06:37.750 17:10:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:37.750 * Looking for test storage... 00:06:37.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:37.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.751 17:10:07 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:37.751 17:10:07 -- app/cmdline.sh@17 -- # spdk_tgt_pid=64617 00:06:37.751 17:10:07 -- app/cmdline.sh@18 -- # waitforlisten 64617 00:06:37.751 17:10:07 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:37.751 17:10:07 -- common/autotest_common.sh@817 -- # '[' -z 64617 ']' 00:06:37.751 17:10:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.751 17:10:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:37.751 17:10:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.751 17:10:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:37.751 17:10:07 -- common/autotest_common.sh@10 -- # set +x 00:06:37.751 [2024-04-25 17:10:07.624667] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:37.751 [2024-04-25 17:10:07.625084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64617 ] 00:06:38.010 [2024-04-25 17:10:07.761547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.010 [2024-04-25 17:10:07.813672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.053 17:10:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.053 17:10:08 -- common/autotest_common.sh@850 -- # return 0 00:06:39.053 17:10:08 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:39.053 { 00:06:39.053 "fields": { 00:06:39.053 "commit": "06472fb6d", 00:06:39.053 "major": 24, 00:06:39.053 "minor": 5, 00:06:39.053 "patch": 0, 00:06:39.053 "suffix": "-pre" 00:06:39.053 }, 00:06:39.053 "version": "SPDK v24.05-pre git sha1 06472fb6d" 00:06:39.053 } 00:06:39.053 17:10:08 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.053 17:10:08 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.053 17:10:08 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:39.053 17:10:08 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.053 17:10:08 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.053 17:10:08 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.053 17:10:08 -- app/cmdline.sh@26 -- # sort 00:06:39.053 17:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.053 17:10:08 -- common/autotest_common.sh@10 -- # set +x 00:06:39.053 17:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.053 17:10:08 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.053 17:10:08 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.053 17:10:08 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.053 17:10:08 -- common/autotest_common.sh@638 -- # local es=0 00:06:39.053 17:10:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.053 17:10:08 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.053 17:10:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
00:06:39.053 17:10:08 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.053 17:10:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:39.053 17:10:08 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.053 17:10:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:39.053 17:10:08 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:39.053 17:10:08 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:39.053 17:10:08 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.316 2024/04/25 17:10:09 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:39.316 request: 00:06:39.316 { 00:06:39.316 "method": "env_dpdk_get_mem_stats", 00:06:39.316 "params": {} 00:06:39.317 } 00:06:39.317 Got JSON-RPC error response 00:06:39.317 GoRPCClient: error on JSON-RPC call 00:06:39.317 17:10:09 -- common/autotest_common.sh@641 -- # es=1 00:06:39.317 17:10:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:39.317 17:10:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:39.317 17:10:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:39.317 17:10:09 -- app/cmdline.sh@1 -- # killprocess 64617 00:06:39.317 17:10:09 -- common/autotest_common.sh@936 -- # '[' -z 64617 ']' 00:06:39.317 17:10:09 -- common/autotest_common.sh@940 -- # kill -0 64617 00:06:39.317 17:10:09 -- common/autotest_common.sh@941 -- # uname 00:06:39.317 17:10:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.317 17:10:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64617 00:06:39.317 17:10:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.317 17:10:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.317 killing process with pid 64617 00:06:39.317 17:10:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64617' 00:06:39.317 17:10:09 -- common/autotest_common.sh@955 -- # kill 64617 00:06:39.317 17:10:09 -- common/autotest_common.sh@960 -- # wait 64617 00:06:39.595 00:06:39.595 real 0m2.024s 00:06:39.595 user 0m2.704s 00:06:39.595 sys 0m0.357s 00:06:39.595 17:10:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.595 ************************************ 00:06:39.595 END TEST app_cmdline 00:06:39.595 ************************************ 00:06:39.595 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.595 17:10:09 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.595 17:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.595 17:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.595 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.853 ************************************ 00:06:39.853 START TEST version 00:06:39.853 ************************************ 00:06:39.853 17:10:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:39.853 * Looking for test storage... 
00:06:39.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.853 17:10:09 -- app/version.sh@17 -- # get_header_version major 00:06:39.853 17:10:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.853 17:10:09 -- app/version.sh@14 -- # tr -d '"' 00:06:39.853 17:10:09 -- app/version.sh@14 -- # cut -f2 00:06:39.853 17:10:09 -- app/version.sh@17 -- # major=24 00:06:39.853 17:10:09 -- app/version.sh@18 -- # get_header_version minor 00:06:39.853 17:10:09 -- app/version.sh@14 -- # cut -f2 00:06:39.853 17:10:09 -- app/version.sh@14 -- # tr -d '"' 00:06:39.853 17:10:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.853 17:10:09 -- app/version.sh@18 -- # minor=5 00:06:39.853 17:10:09 -- app/version.sh@19 -- # get_header_version patch 00:06:39.853 17:10:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.853 17:10:09 -- app/version.sh@14 -- # cut -f2 00:06:39.853 17:10:09 -- app/version.sh@14 -- # tr -d '"' 00:06:39.853 17:10:09 -- app/version.sh@19 -- # patch=0 00:06:39.853 17:10:09 -- app/version.sh@20 -- # get_header_version suffix 00:06:39.853 17:10:09 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:39.853 17:10:09 -- app/version.sh@14 -- # cut -f2 00:06:39.853 17:10:09 -- app/version.sh@14 -- # tr -d '"' 00:06:39.853 17:10:09 -- app/version.sh@20 -- # suffix=-pre 00:06:39.853 17:10:09 -- app/version.sh@22 -- # version=24.5 00:06:39.853 17:10:09 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:39.853 17:10:09 -- app/version.sh@28 -- # version=24.5rc0 00:06:39.853 17:10:09 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:39.853 17:10:09 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:39.853 17:10:09 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:39.853 17:10:09 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:39.853 00:06:39.853 real 0m0.157s 00:06:39.853 user 0m0.090s 00:06:39.853 sys 0m0.093s 00:06:39.853 17:10:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.853 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:39.853 ************************************ 00:06:39.853 END TEST version 00:06:39.853 ************************************ 00:06:39.853 17:10:09 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:39.853 17:10:09 -- spdk/autotest.sh@194 -- # uname -s 00:06:39.853 17:10:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:39.853 17:10:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:39.853 17:10:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:39.853 17:10:09 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:39.853 17:10:09 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:39.853 17:10:09 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:39.853 17:10:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:39.853 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:40.112 17:10:09 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:40.112 17:10:09 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:40.112 17:10:09 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:40.112 17:10:09 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:40.112 17:10:09 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:40.112 17:10:09 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:40.112 17:10:09 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:40.112 17:10:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:40.112 17:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.112 17:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:40.112 ************************************ 00:06:40.112 START TEST nvmf_tcp 00:06:40.112 ************************************ 00:06:40.112 17:10:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:40.112 * Looking for test storage... 00:06:40.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:40.112 17:10:10 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:40.112 17:10:10 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:40.113 17:10:10 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:40.113 17:10:10 -- nvmf/common.sh@7 -- # uname -s 00:06:40.113 17:10:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.113 17:10:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.113 17:10:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.113 17:10:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.113 17:10:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.113 17:10:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.113 17:10:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.113 17:10:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.113 17:10:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.113 17:10:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.113 17:10:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:06:40.113 17:10:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:06:40.113 17:10:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.113 17:10:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.113 17:10:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:40.113 17:10:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.113 17:10:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:40.113 17:10:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.113 17:10:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.113 17:10:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.113 17:10:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.113 17:10:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.113 17:10:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.113 17:10:10 -- paths/export.sh@5 -- # export PATH 00:06:40.113 17:10:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.113 17:10:10 -- nvmf/common.sh@47 -- # : 0 00:06:40.113 17:10:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:40.113 17:10:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:40.113 17:10:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.113 17:10:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.113 17:10:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.113 17:10:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:40.113 17:10:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:40.113 17:10:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:40.113 17:10:10 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:40.113 17:10:10 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:40.113 17:10:10 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:40.113 17:10:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:40.113 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.113 17:10:10 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:40.113 17:10:10 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:40.113 17:10:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:40.113 17:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.113 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.373 ************************************ 00:06:40.373 START TEST nvmf_example 00:06:40.373 ************************************ 00:06:40.373 17:10:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:40.373 * Looking for test storage... 
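The nvmf/common.sh setup traced above (and sourced again just below by nvmf_example.sh) gives every initiator command a per-run identity: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID keeps only the UUID portion. A rough condensation, assuming nvme-cli is installed; the exact string manipulation inside common.sh may differ from the parameter expansion shown here:

# hypothetical condensation of the identity/default variables seen in nvmf/common.sh
NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<per-run uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: keep only the trailing UUID field
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_PORT=4420
NVMF_IP_PREFIX=192.168.100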
00:06:40.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:40.373 17:10:10 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:40.373 17:10:10 -- nvmf/common.sh@7 -- # uname -s 00:06:40.373 17:10:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.373 17:10:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.373 17:10:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.373 17:10:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.373 17:10:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.373 17:10:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.373 17:10:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.373 17:10:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.373 17:10:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.373 17:10:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.373 17:10:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:06:40.373 17:10:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:06:40.373 17:10:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.373 17:10:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.373 17:10:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:40.373 17:10:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.373 17:10:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:40.373 17:10:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.373 17:10:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.373 17:10:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.373 17:10:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.373 17:10:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.373 17:10:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.373 17:10:10 -- paths/export.sh@5 -- # export PATH 00:06:40.373 17:10:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.373 17:10:10 -- nvmf/common.sh@47 -- # : 0 00:06:40.373 17:10:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:40.373 17:10:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:40.373 17:10:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.373 17:10:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.373 17:10:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.373 17:10:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:40.373 17:10:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:40.373 17:10:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:40.373 17:10:10 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:40.373 17:10:10 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:40.373 17:10:10 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:40.373 17:10:10 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:40.373 17:10:10 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:40.373 17:10:10 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:40.373 17:10:10 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:40.373 17:10:10 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:40.373 17:10:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:40.373 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.373 17:10:10 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:40.373 17:10:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:40.373 17:10:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.373 17:10:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:40.373 17:10:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:40.373 17:10:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:40.373 17:10:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.373 17:10:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.373 17:10:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.373 17:10:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:06:40.373 17:10:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:06:40.373 17:10:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:06:40.373 17:10:10 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:06:40.373 17:10:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:06:40.373 17:10:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:06:40.373 17:10:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.373 17:10:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.373 17:10:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:40.373 17:10:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:40.373 17:10:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:40.373 17:10:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:40.373 17:10:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:40.373 17:10:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.373 17:10:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:40.373 17:10:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:40.373 17:10:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:40.373 17:10:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:40.373 17:10:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:40.373 Cannot find device "nvmf_init_br" 00:06:40.373 17:10:10 -- nvmf/common.sh@154 -- # true 00:06:40.373 17:10:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:40.373 Cannot find device "nvmf_tgt_br" 00:06:40.373 17:10:10 -- nvmf/common.sh@155 -- # true 00:06:40.373 17:10:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:40.373 Cannot find device "nvmf_tgt_br2" 00:06:40.373 17:10:10 -- nvmf/common.sh@156 -- # true 00:06:40.373 17:10:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:40.373 Cannot find device "nvmf_init_br" 00:06:40.373 17:10:10 -- nvmf/common.sh@157 -- # true 00:06:40.373 17:10:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:40.373 Cannot find device "nvmf_tgt_br" 00:06:40.373 17:10:10 -- nvmf/common.sh@158 -- # true 00:06:40.373 17:10:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:40.373 Cannot find device "nvmf_tgt_br2" 00:06:40.373 17:10:10 -- nvmf/common.sh@159 -- # true 00:06:40.373 17:10:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:40.373 Cannot find device "nvmf_br" 00:06:40.373 17:10:10 -- nvmf/common.sh@160 -- # true 00:06:40.373 17:10:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:40.632 Cannot find device "nvmf_init_if" 00:06:40.632 17:10:10 -- nvmf/common.sh@161 -- # true 00:06:40.632 17:10:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:40.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:40.632 17:10:10 -- nvmf/common.sh@162 -- # true 00:06:40.632 17:10:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:40.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:40.632 17:10:10 -- nvmf/common.sh@163 -- # true 00:06:40.632 17:10:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:40.632 17:10:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:40.632 17:10:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:40.632 17:10:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:40.632 17:10:10 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:40.632 17:10:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:40.632 17:10:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:40.632 17:10:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:40.632 17:10:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:40.632 17:10:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:40.632 17:10:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:40.632 17:10:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:40.632 17:10:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:40.632 17:10:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:40.632 17:10:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:40.632 17:10:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:40.632 17:10:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:40.632 17:10:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:40.632 17:10:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:40.632 17:10:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:40.632 17:10:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:40.632 17:10:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:40.632 17:10:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:40.891 17:10:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:40.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:40.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:06:40.891 00:06:40.891 --- 10.0.0.2 ping statistics --- 00:06:40.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.891 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:40.891 17:10:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:40.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:40.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:06:40.891 00:06:40.891 --- 10.0.0.3 ping statistics --- 00:06:40.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.891 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:06:40.891 17:10:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:40.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:40.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:06:40.891 00:06:40.891 --- 10.0.0.1 ping statistics --- 00:06:40.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.891 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:06:40.891 17:10:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.891 17:10:10 -- nvmf/common.sh@422 -- # return 0 00:06:40.891 17:10:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:40.891 17:10:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.891 17:10:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:40.891 17:10:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:40.891 17:10:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.891 17:10:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:40.891 17:10:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:40.891 17:10:10 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:40.891 17:10:10 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:40.891 17:10:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:40.891 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.891 17:10:10 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:40.891 17:10:10 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:40.891 17:10:10 -- target/nvmf_example.sh@34 -- # nvmfpid=64983 00:06:40.891 17:10:10 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:40.891 17:10:10 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:40.891 17:10:10 -- target/nvmf_example.sh@36 -- # waitforlisten 64983 00:06:40.891 17:10:10 -- common/autotest_common.sh@817 -- # '[' -z 64983 ']' 00:06:40.891 17:10:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.891 17:10:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:40.891 17:10:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
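The nvmf_veth_init sequence traced above builds the virtual topology that the ping checks then verify: an initiator veth on the host, a target veth inside the nvmf_tgt_ns_spdk namespace, and a bridge joining the peer ends. A condensed sketch using only commands that appear in the trace; the second target interface (10.0.0.3), the FORWARD rule, and error handling are omitted for brevity:

# condensed sketch of the veth/namespace topology built by nvmf_veth_init above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP to the target port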
00:06:40.891 17:10:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:40.891 17:10:10 -- common/autotest_common.sh@10 -- # set +x 00:06:41.828 17:10:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:41.828 17:10:11 -- common/autotest_common.sh@850 -- # return 0 00:06:41.828 17:10:11 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:41.828 17:10:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:41.828 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.828 17:10:11 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:41.828 17:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.828 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.828 17:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.828 17:10:11 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:41.828 17:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.828 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.828 17:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.828 17:10:11 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:41.828 17:10:11 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:41.828 17:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.828 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.828 17:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.828 17:10:11 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:41.828 17:10:11 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:41.828 17:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.828 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.828 17:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:41.828 17:10:11 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:41.828 17:10:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:41.828 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:06:42.086 17:10:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:42.086 17:10:11 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:42.086 17:10:11 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:52.056 Initializing NVMe Controllers 00:06:52.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:52.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:52.057 Initialization complete. Launching workers. 
00:06:52.057 ======================================================== 00:06:52.057 Latency(us) 00:06:52.057 Device Information : IOPS MiB/s Average min max 00:06:52.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15857.27 61.94 4035.63 538.62 27159.58 00:06:52.057 ======================================================== 00:06:52.057 Total : 15857.27 61.94 4035.63 538.62 27159.58 00:06:52.057 00:06:52.057 17:10:22 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:52.057 17:10:22 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:52.057 17:10:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:52.057 17:10:22 -- nvmf/common.sh@117 -- # sync 00:06:52.315 17:10:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.315 17:10:22 -- nvmf/common.sh@120 -- # set +e 00:06:52.315 17:10:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.315 17:10:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.315 rmmod nvme_tcp 00:06:52.315 rmmod nvme_fabrics 00:06:52.315 rmmod nvme_keyring 00:06:52.315 17:10:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.315 17:10:22 -- nvmf/common.sh@124 -- # set -e 00:06:52.315 17:10:22 -- nvmf/common.sh@125 -- # return 0 00:06:52.315 17:10:22 -- nvmf/common.sh@478 -- # '[' -n 64983 ']' 00:06:52.315 17:10:22 -- nvmf/common.sh@479 -- # killprocess 64983 00:06:52.315 17:10:22 -- common/autotest_common.sh@936 -- # '[' -z 64983 ']' 00:06:52.315 17:10:22 -- common/autotest_common.sh@940 -- # kill -0 64983 00:06:52.315 17:10:22 -- common/autotest_common.sh@941 -- # uname 00:06:52.315 17:10:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.315 17:10:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64983 00:06:52.315 17:10:22 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:52.315 killing process with pid 64983 00:06:52.315 17:10:22 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:52.315 17:10:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64983' 00:06:52.315 17:10:22 -- common/autotest_common.sh@955 -- # kill 64983 00:06:52.315 17:10:22 -- common/autotest_common.sh@960 -- # wait 64983 00:06:52.315 nvmf threads initialize successfully 00:06:52.315 bdev subsystem init successfully 00:06:52.315 created a nvmf target service 00:06:52.315 create targets's poll groups done 00:06:52.315 all subsystems of target started 00:06:52.315 nvmf target is running 00:06:52.315 all subsystems of target stopped 00:06:52.315 destroy targets's poll groups done 00:06:52.315 destroyed the nvmf target service 00:06:52.315 bdev subsystem finish successfully 00:06:52.315 nvmf threads destroy successfully 00:06:52.315 17:10:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:52.315 17:10:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:52.315 17:10:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:52.315 17:10:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.315 17:10:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:52.315 17:10:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.315 17:10:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.315 17:10:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.575 17:10:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:52.575 17:10:22 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:52.575 17:10:22 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:06:52.575 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.575 ************************************ 00:06:52.575 END TEST nvmf_example 00:06:52.575 ************************************ 00:06:52.575 00:06:52.575 real 0m12.215s 00:06:52.575 user 0m44.042s 00:06:52.575 sys 0m1.924s 00:06:52.575 17:10:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.575 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.575 17:10:22 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:52.575 17:10:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:52.575 17:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.575 17:10:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.575 ************************************ 00:06:52.575 START TEST nvmf_filesystem 00:06:52.575 ************************************ 00:06:52.575 17:10:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:52.575 * Looking for test storage... 00:06:52.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.836 17:10:22 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:52.836 17:10:22 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:52.836 17:10:22 -- common/autotest_common.sh@34 -- # set -e 00:06:52.836 17:10:22 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:52.836 17:10:22 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:52.836 17:10:22 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:52.836 17:10:22 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:52.836 17:10:22 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:52.836 17:10:22 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:52.836 17:10:22 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:52.836 17:10:22 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:52.836 17:10:22 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:52.836 17:10:22 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:52.836 17:10:22 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:52.836 17:10:22 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:52.836 17:10:22 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:52.836 17:10:22 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:52.836 17:10:22 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:52.836 17:10:22 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:52.836 17:10:22 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:52.836 17:10:22 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:52.836 17:10:22 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:52.836 17:10:22 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:52.836 17:10:22 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:52.836 17:10:22 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:52.836 17:10:22 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:52.836 17:10:22 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:52.836 17:10:22 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:52.836 17:10:22 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:52.836 17:10:22 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:52.836 17:10:22 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:52.836 17:10:22 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:52.836 17:10:22 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:52.836 17:10:22 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:52.836 17:10:22 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:52.836 17:10:22 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:52.836 17:10:22 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:52.836 17:10:22 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:52.836 17:10:22 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:52.836 17:10:22 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:52.836 17:10:22 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:52.836 17:10:22 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:52.836 17:10:22 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:52.836 17:10:22 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:52.836 17:10:22 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:52.836 17:10:22 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:52.836 17:10:22 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:52.836 17:10:22 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:52.836 17:10:22 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:52.836 17:10:22 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:52.836 17:10:22 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:52.837 17:10:22 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:52.837 17:10:22 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:52.837 17:10:22 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:52.837 17:10:22 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:52.837 17:10:22 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:52.837 17:10:22 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:52.837 17:10:22 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:52.837 17:10:22 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:52.837 17:10:22 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:52.837 17:10:22 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:52.837 17:10:22 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:52.837 17:10:22 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:52.837 17:10:22 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:52.837 17:10:22 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:52.837 17:10:22 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:52.837 17:10:22 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:06:52.837 17:10:22 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:52.837 17:10:22 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:52.837 17:10:22 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:52.837 17:10:22 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:52.837 17:10:22 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:52.837 17:10:22 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:52.837 17:10:22 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:52.837 17:10:22 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:52.837 
17:10:22 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:52.837 17:10:22 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:52.837 17:10:22 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:06:52.837 17:10:22 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:52.837 17:10:22 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:52.837 17:10:22 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:52.837 17:10:22 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:52.837 17:10:22 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:52.837 17:10:22 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:52.837 17:10:22 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:52.837 17:10:22 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:52.837 17:10:22 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:52.837 17:10:22 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:52.837 17:10:22 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:52.837 17:10:22 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:52.837 17:10:22 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:52.837 17:10:22 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:52.837 17:10:22 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:52.837 17:10:22 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:52.837 17:10:22 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:52.837 17:10:22 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:52.837 17:10:22 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:52.837 17:10:22 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:52.837 17:10:22 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:52.837 17:10:22 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:52.837 17:10:22 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:52.837 17:10:22 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:52.837 17:10:22 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:52.837 17:10:22 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:52.837 17:10:22 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:52.837 17:10:22 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:52.837 #define SPDK_CONFIG_H 00:06:52.837 #define SPDK_CONFIG_APPS 1 00:06:52.837 #define SPDK_CONFIG_ARCH native 00:06:52.837 #undef SPDK_CONFIG_ASAN 00:06:52.837 #define SPDK_CONFIG_AVAHI 1 00:06:52.837 #undef SPDK_CONFIG_CET 00:06:52.837 #define SPDK_CONFIG_COVERAGE 1 00:06:52.837 #define SPDK_CONFIG_CROSS_PREFIX 00:06:52.837 #undef SPDK_CONFIG_CRYPTO 00:06:52.837 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:52.837 #undef SPDK_CONFIG_CUSTOMOCF 00:06:52.837 #undef SPDK_CONFIG_DAOS 00:06:52.837 #define SPDK_CONFIG_DAOS_DIR 00:06:52.837 #define SPDK_CONFIG_DEBUG 1 00:06:52.837 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:52.837 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:52.837 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:52.837 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:52.837 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:52.837 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:52.837 #define SPDK_CONFIG_EXAMPLES 1 00:06:52.837 #undef SPDK_CONFIG_FC 00:06:52.837 #define SPDK_CONFIG_FC_PATH 00:06:52.837 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:52.837 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:52.837 #undef SPDK_CONFIG_FUSE 00:06:52.837 #undef SPDK_CONFIG_FUZZER 00:06:52.837 #define SPDK_CONFIG_FUZZER_LIB 00:06:52.837 #define SPDK_CONFIG_GOLANG 1 00:06:52.837 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:52.837 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:52.837 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:52.837 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:52.837 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:52.837 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:52.837 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:52.837 #define SPDK_CONFIG_IDXD 1 00:06:52.837 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:52.837 #undef SPDK_CONFIG_IPSEC_MB 00:06:52.837 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:52.837 #define SPDK_CONFIG_ISAL 1 00:06:52.837 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:52.837 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:52.837 #define SPDK_CONFIG_LIBDIR 00:06:52.837 #undef SPDK_CONFIG_LTO 00:06:52.837 #define SPDK_CONFIG_MAX_LCORES 00:06:52.837 #define SPDK_CONFIG_NVME_CUSE 1 00:06:52.837 #undef SPDK_CONFIG_OCF 00:06:52.837 #define SPDK_CONFIG_OCF_PATH 00:06:52.837 #define SPDK_CONFIG_OPENSSL_PATH 00:06:52.837 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:52.837 #define SPDK_CONFIG_PGO_DIR 00:06:52.837 #undef SPDK_CONFIG_PGO_USE 00:06:52.837 #define SPDK_CONFIG_PREFIX /usr/local 00:06:52.837 #undef SPDK_CONFIG_RAID5F 00:06:52.837 #undef SPDK_CONFIG_RBD 00:06:52.837 #define SPDK_CONFIG_RDMA 1 00:06:52.837 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:52.837 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:52.837 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:52.837 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:52.837 #define SPDK_CONFIG_SHARED 1 00:06:52.837 #undef SPDK_CONFIG_SMA 00:06:52.837 #define SPDK_CONFIG_TESTS 1 00:06:52.837 #undef SPDK_CONFIG_TSAN 00:06:52.837 #define SPDK_CONFIG_UBLK 1 00:06:52.837 #define SPDK_CONFIG_UBSAN 1 00:06:52.837 #undef SPDK_CONFIG_UNIT_TESTS 00:06:52.837 #undef SPDK_CONFIG_URING 00:06:52.837 #define SPDK_CONFIG_URING_PATH 00:06:52.837 #undef SPDK_CONFIG_URING_ZNS 00:06:52.837 #define SPDK_CONFIG_USDT 1 00:06:52.837 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:52.837 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:52.837 #define SPDK_CONFIG_VFIO_USER 1 00:06:52.837 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:52.837 #define SPDK_CONFIG_VHOST 1 00:06:52.837 #define SPDK_CONFIG_VIRTIO 1 00:06:52.837 #undef SPDK_CONFIG_VTUNE 00:06:52.837 #define SPDK_CONFIG_VTUNE_DIR 00:06:52.837 #define SPDK_CONFIG_WERROR 1 00:06:52.837 #define SPDK_CONFIG_WPDK_DIR 00:06:52.837 #undef SPDK_CONFIG_XNVME 00:06:52.837 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:52.837 17:10:22 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:52.837 17:10:22 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.837 17:10:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.837 17:10:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.837 17:10:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.837 17:10:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.837 17:10:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.837 17:10:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.837 17:10:22 -- paths/export.sh@5 -- # export PATH 00:06:52.837 17:10:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.837 17:10:22 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:52.837 17:10:22 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:52.837 17:10:22 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:52.837 17:10:22 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:52.837 17:10:22 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:52.837 17:10:22 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:52.837 17:10:22 -- pm/common@67 -- # TEST_TAG=N/A 00:06:52.837 17:10:22 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:52.837 17:10:22 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:52.837 17:10:22 -- pm/common@71 -- # uname -s 00:06:52.837 17:10:22 -- pm/common@71 -- # PM_OS=Linux 00:06:52.837 17:10:22 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:52.837 17:10:22 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:52.837 17:10:22 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:52.837 17:10:22 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:06:52.837 17:10:22 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:52.837 17:10:22 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:52.837 17:10:22 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:06:52.837 17:10:22 -- common/autotest_common.sh@57 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:52.837 17:10:22 -- common/autotest_common.sh@61 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:52.837 17:10:22 -- common/autotest_common.sh@63 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:52.837 17:10:22 -- common/autotest_common.sh@65 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:52.837 17:10:22 -- common/autotest_common.sh@67 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:52.837 17:10:22 -- common/autotest_common.sh@69 -- # : 00:06:52.837 17:10:22 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:52.837 17:10:22 -- common/autotest_common.sh@71 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:52.837 17:10:22 -- common/autotest_common.sh@73 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:52.837 17:10:22 -- common/autotest_common.sh@75 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:52.837 17:10:22 -- common/autotest_common.sh@77 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:52.837 17:10:22 -- common/autotest_common.sh@79 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:52.837 17:10:22 -- common/autotest_common.sh@81 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:52.837 17:10:22 -- common/autotest_common.sh@83 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:52.837 17:10:22 -- common/autotest_common.sh@85 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:52.837 17:10:22 -- common/autotest_common.sh@87 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:52.837 17:10:22 -- common/autotest_common.sh@89 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:52.837 17:10:22 -- common/autotest_common.sh@91 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:52.837 17:10:22 -- common/autotest_common.sh@93 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:52.837 17:10:22 -- common/autotest_common.sh@95 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:52.837 17:10:22 -- common/autotest_common.sh@97 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:52.837 17:10:22 -- common/autotest_common.sh@99 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:52.837 17:10:22 -- common/autotest_common.sh@101 -- # : tcp 00:06:52.837 17:10:22 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:52.837 17:10:22 
-- common/autotest_common.sh@103 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:52.837 17:10:22 -- common/autotest_common.sh@105 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:52.837 17:10:22 -- common/autotest_common.sh@107 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:52.837 17:10:22 -- common/autotest_common.sh@109 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:52.837 17:10:22 -- common/autotest_common.sh@111 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:52.837 17:10:22 -- common/autotest_common.sh@113 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:52.837 17:10:22 -- common/autotest_common.sh@115 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:52.837 17:10:22 -- common/autotest_common.sh@117 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:52.837 17:10:22 -- common/autotest_common.sh@119 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:52.837 17:10:22 -- common/autotest_common.sh@121 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:52.837 17:10:22 -- common/autotest_common.sh@123 -- # : 00:06:52.837 17:10:22 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:52.837 17:10:22 -- common/autotest_common.sh@125 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:52.837 17:10:22 -- common/autotest_common.sh@127 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:52.837 17:10:22 -- common/autotest_common.sh@129 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:52.837 17:10:22 -- common/autotest_common.sh@131 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:52.837 17:10:22 -- common/autotest_common.sh@133 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:52.837 17:10:22 -- common/autotest_common.sh@135 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:52.837 17:10:22 -- common/autotest_common.sh@137 -- # : 00:06:52.837 17:10:22 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:52.837 17:10:22 -- common/autotest_common.sh@139 -- # : true 00:06:52.837 17:10:22 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:52.837 17:10:22 -- common/autotest_common.sh@141 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:52.837 17:10:22 -- common/autotest_common.sh@143 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:52.837 17:10:22 -- common/autotest_common.sh@145 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:52.837 17:10:22 -- common/autotest_common.sh@147 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:52.837 17:10:22 -- common/autotest_common.sh@149 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:52.837 
17:10:22 -- common/autotest_common.sh@151 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:52.837 17:10:22 -- common/autotest_common.sh@153 -- # : 00:06:52.837 17:10:22 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:52.837 17:10:22 -- common/autotest_common.sh@155 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:52.837 17:10:22 -- common/autotest_common.sh@157 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:52.837 17:10:22 -- common/autotest_common.sh@159 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:52.837 17:10:22 -- common/autotest_common.sh@161 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:52.837 17:10:22 -- common/autotest_common.sh@163 -- # : 0 00:06:52.837 17:10:22 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:52.837 17:10:22 -- common/autotest_common.sh@166 -- # : 00:06:52.837 17:10:22 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:52.837 17:10:22 -- common/autotest_common.sh@168 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:52.837 17:10:22 -- common/autotest_common.sh@170 -- # : 1 00:06:52.837 17:10:22 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:52.837 17:10:22 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:52.837 17:10:22 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:52.837 17:10:22 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:52.837 17:10:22 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:52.837 17:10:22 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:52.838 17:10:22 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:52.838 17:10:22 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:52.838 17:10:22 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
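The SPDK_TEST_* flags exported in the trace above are what autotest.sh consults when deciding which suites to launch; the earlier "[ 1 -eq 1 ]" and "[ tcp = tcp ]" checks followed by run_test nvmf_tcp are one such gate. A loose sketch of that gating, not the literal autotest.sh code; $rootdir stands in for /home/vagrant/spdk_repo/spdk:

# hypothetical sketch of how autotest.sh gates the nvmf suite on the exported flags
if [[ $SPDK_TEST_NVMF -eq 1 ]]; then
    if [[ $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
        run_test "nvmf_tcp" "$rootdir/test/nvmf/nvmf.sh" --transport=tcp
    fi
fi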
00:06:52.838 17:10:22 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:52.838 17:10:22 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:52.838 17:10:22 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:52.838 17:10:22 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:52.838 17:10:22 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:52.838 17:10:22 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:52.838 17:10:22 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:52.838 17:10:22 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:52.838 17:10:22 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:52.838 17:10:22 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:52.838 17:10:22 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:52.838 17:10:22 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:52.838 17:10:22 -- common/autotest_common.sh@199 -- # cat 00:06:52.838 17:10:22 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:52.838 17:10:22 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:52.838 17:10:22 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:52.838 17:10:22 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:52.838 17:10:22 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:52.838 17:10:22 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:52.838 17:10:22 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:52.838 17:10:22 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:52.838 17:10:22 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:52.838 17:10:22 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:52.838 17:10:22 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:52.838 17:10:22 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:52.838 17:10:22 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:52.838 17:10:22 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:52.838 17:10:22 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:52.838 17:10:22 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:52.838 17:10:22 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:52.838 17:10:22 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:52.838 17:10:22 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:52.838 17:10:22 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:52.838 17:10:22 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:52.838 17:10:22 -- common/autotest_common.sh@252 -- # valgrind= 00:06:52.838 17:10:22 -- common/autotest_common.sh@258 -- # uname -s 00:06:52.838 17:10:22 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:52.838 17:10:22 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:52.838 17:10:22 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:52.838 17:10:22 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:52.838 17:10:22 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:52.838 17:10:22 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:06:52.838 17:10:22 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:52.838 17:10:22 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:52.838 17:10:22 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:52.838 17:10:22 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:52.838 17:10:22 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:52.838 17:10:22 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:52.838 17:10:22 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:52.838 17:10:22 -- common/autotest_common.sh@307 -- # [[ -z 65231 ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@307 -- # kill -0 65231 00:06:52.838 17:10:22 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:52.838 17:10:22 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:52.838 17:10:22 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:52.838 17:10:22 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:52.838 17:10:22 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:52.838 17:10:22 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:52.838 17:10:22 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:52.838 17:10:22 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.k5B3iK 00:06:52.838 17:10:22 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:52.838 17:10:22 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.k5B3iK/tests/target /tmp/spdk.k5B3iK 00:06:52.838 17:10:22 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@316 -- # df -T 00:06:52.838 17:10:22 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=6266613760 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=13787938816 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=5236621312 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=13787938816 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=5236621312 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:06:52.838 17:10:22 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267760640 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267895808 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=135168 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:06:52.838 17:10:22 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # avails["$mount"]=97973329920 00:06:52.838 17:10:22 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:06:52.838 17:10:22 -- common/autotest_common.sh@352 -- # uses["$mount"]=1729449984 00:06:52.838 17:10:22 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:52.838 17:10:22 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:52.838 * Looking for test storage... 
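The trace above is the test-storage probe from autotest_common.sh: it snapshots `df -T` into per-mount arrays and then, in the lines that follow, walks the candidate directories until one backs at least the requested ~2 GiB. A condensed, stand-alone sketch of that pattern is below; the candidate paths and the byte-sized `df -B1` output are illustrative assumptions, not the verbatim helper.

  # Sketch of the storage-probe pattern traced above: index `df` output by
  # mount point, then take the first candidate directory with enough room.
  declare -A fss avails
  while read -r source fs size use avail _ mount; do
      fss["$mount"]=$fs        # filesystem type (btrfs, tmpfs, ...)
      avails["$mount"]=$avail  # free bytes on that mount (-B1 below)
  done < <(df -T -B1 | grep -v Filesystem)

  requested_size=$((2 * 1024 * 1024 * 1024))      # ~2 GiB, as in the trace
  storage_candidates=("$HOME/spdk_tests" "/tmp")  # assumed example paths

  for target_dir in "${storage_candidates[@]}"; do
      mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      if [[ ${avails[$mount_point]:-0} -ge $requested_size ]]; then
          printf '* Found test storage at %s (%s, %s bytes free)\n' \
              "$target_dir" "${fss[$mount_point]}" "${avails[$mount_point]}"
          break
      fi
  done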
00:06:52.838 17:10:22 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:52.838 17:10:22 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:52.838 17:10:22 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.838 17:10:22 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:52.838 17:10:22 -- common/autotest_common.sh@361 -- # mount=/home 00:06:52.838 17:10:22 -- common/autotest_common.sh@363 -- # target_space=13787938816 00:06:52.838 17:10:22 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:52.838 17:10:22 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:52.838 17:10:22 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.838 17:10:22 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.838 17:10:22 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:52.838 17:10:22 -- common/autotest_common.sh@378 -- # return 0 00:06:52.838 17:10:22 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:52.838 17:10:22 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:52.838 17:10:22 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:52.838 17:10:22 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:52.838 17:10:22 -- common/autotest_common.sh@1673 -- # true 00:06:52.838 17:10:22 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:52.838 17:10:22 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:52.838 17:10:22 -- common/autotest_common.sh@27 -- # exec 00:06:52.838 17:10:22 -- common/autotest_common.sh@29 -- # exec 00:06:52.838 17:10:22 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:52.838 17:10:22 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:52.838 17:10:22 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:52.838 17:10:22 -- common/autotest_common.sh@18 -- # set -x 00:06:52.838 17:10:22 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:52.838 17:10:22 -- nvmf/common.sh@7 -- # uname -s 00:06:52.838 17:10:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.838 17:10:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.838 17:10:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.838 17:10:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.838 17:10:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.838 17:10:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.838 17:10:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.838 17:10:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.838 17:10:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.838 17:10:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.838 17:10:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:06:52.838 17:10:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:06:52.838 17:10:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.838 17:10:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.838 17:10:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:52.838 17:10:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.838 17:10:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.838 17:10:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.838 17:10:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.838 17:10:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.838 17:10:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.838 17:10:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.838 17:10:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.838 17:10:22 -- paths/export.sh@5 -- # export PATH 00:06:52.838 17:10:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.838 17:10:22 -- nvmf/common.sh@47 -- # : 0 00:06:52.838 17:10:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.838 17:10:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.838 17:10:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.838 17:10:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.838 17:10:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.838 17:10:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.838 17:10:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.838 17:10:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.838 17:10:22 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:52.838 17:10:22 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:52.838 17:10:22 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:52.838 17:10:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:52.838 17:10:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.838 17:10:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:52.838 17:10:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:52.838 17:10:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:52.838 17:10:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.838 17:10:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.838 17:10:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.838 17:10:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:06:52.838 17:10:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:06:52.838 17:10:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:06:52.838 17:10:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:06:52.838 17:10:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:06:52.838 17:10:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:06:52.838 17:10:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.838 17:10:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.838 17:10:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:52.838 17:10:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:52.838 17:10:22 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:52.838 17:10:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:52.838 17:10:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:52.839 17:10:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.839 17:10:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:52.839 17:10:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:52.839 17:10:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:52.839 17:10:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:52.839 17:10:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:52.839 17:10:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:52.839 Cannot find device "nvmf_tgt_br" 00:06:52.839 17:10:22 -- nvmf/common.sh@155 -- # true 00:06:52.839 17:10:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:52.839 Cannot find device "nvmf_tgt_br2" 00:06:52.839 17:10:22 -- nvmf/common.sh@156 -- # true 00:06:52.839 17:10:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:52.839 17:10:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:52.839 Cannot find device "nvmf_tgt_br" 00:06:52.839 17:10:22 -- nvmf/common.sh@158 -- # true 00:06:52.839 17:10:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:52.839 Cannot find device "nvmf_tgt_br2" 00:06:52.839 17:10:22 -- nvmf/common.sh@159 -- # true 00:06:52.839 17:10:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:53.098 17:10:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:53.098 17:10:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:53.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.098 17:10:22 -- nvmf/common.sh@162 -- # true 00:06:53.098 17:10:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:53.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.098 17:10:22 -- nvmf/common.sh@163 -- # true 00:06:53.098 17:10:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:53.098 17:10:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:53.098 17:10:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:53.098 17:10:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:53.098 17:10:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:53.098 17:10:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:53.098 17:10:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:53.098 17:10:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:53.098 17:10:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:53.098 17:10:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:53.098 17:10:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:53.098 17:10:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:53.098 17:10:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:53.098 17:10:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:53.098 17:10:22 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:53.098 17:10:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:53.098 17:10:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:53.098 17:10:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:53.098 17:10:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:53.098 17:10:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:53.098 17:10:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:53.098 17:10:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:53.098 17:10:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:53.098 17:10:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:53.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:06:53.098 00:06:53.098 --- 10.0.0.2 ping statistics --- 00:06:53.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.098 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:06:53.098 17:10:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:53.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:53.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:06:53.098 00:06:53.098 --- 10.0.0.3 ping statistics --- 00:06:53.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.098 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:06:53.098 17:10:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:53.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:06:53.098 00:06:53.098 --- 10.0.0.1 ping statistics --- 00:06:53.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.098 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:53.098 17:10:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.098 17:10:23 -- nvmf/common.sh@422 -- # return 0 00:06:53.098 17:10:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:53.098 17:10:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.098 17:10:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:53.098 17:10:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:53.098 17:10:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.098 17:10:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:53.098 17:10:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:53.098 17:10:23 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:53.098 17:10:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:53.098 17:10:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.098 17:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.357 ************************************ 00:06:53.357 START TEST nvmf_filesystem_no_in_capsule 00:06:53.357 ************************************ 00:06:53.357 17:10:23 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:53.357 17:10:23 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:53.357 17:10:23 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:53.357 17:10:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:53.357 17:10:23 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:06:53.357 17:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.357 17:10:23 -- nvmf/common.sh@470 -- # nvmfpid=65396 00:06:53.357 17:10:23 -- nvmf/common.sh@471 -- # waitforlisten 65396 00:06:53.357 17:10:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.357 17:10:23 -- common/autotest_common.sh@817 -- # '[' -z 65396 ']' 00:06:53.357 17:10:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.357 17:10:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.357 17:10:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.357 17:10:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.357 17:10:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.357 [2024-04-25 17:10:23.177584] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:06:53.357 [2024-04-25 17:10:23.177667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.357 [2024-04-25 17:10:23.315909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.615 [2024-04-25 17:10:23.386851] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.615 [2024-04-25 17:10:23.386906] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.615 [2024-04-25 17:10:23.386920] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.615 [2024-04-25 17:10:23.386930] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.615 [2024-04-25 17:10:23.386938] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
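The target started above listens inside the network namespace that nvmf_veth_init built a few lines earlier: a namespace for the target, veth pairs whose bridge-side ends join nvmf_br in the root namespace, 10.0.0.x/24 addressing, and iptables rules admitting NVMe/TCP on port 4420. A compressed recreation of that topology, using the interface and namespace names from the trace (the second target interface, 10.0.0.3 on nvmf_tgt_if2, is built the same way and omitted here, as are teardown and error handling):

  # Condensed recreation of the topology from the nvmf_veth_init trace above.
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_br ends stay in the root namespace and join the bridge;
  # nvmf_tgt_if moves into the target's namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1 outside, target 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the root-namespace ends together and admit NVMe/TCP traffic
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2   # initiator-to-target sanity check, as in the trace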
00:06:53.615 [2024-04-25 17:10:23.387106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.615 [2024-04-25 17:10:23.387317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.615 [2024-04-25 17:10:23.387819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.615 [2024-04-25 17:10:23.387834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.183 17:10:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:54.183 17:10:24 -- common/autotest_common.sh@850 -- # return 0 00:06:54.183 17:10:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:54.183 17:10:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:54.183 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.442 17:10:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.442 17:10:24 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:54.442 17:10:24 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:54.442 17:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.442 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.442 [2024-04-25 17:10:24.192583] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.442 17:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.442 17:10:24 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:54.442 17:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.442 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.442 Malloc1 00:06:54.442 17:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.442 17:10:24 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:54.442 17:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.442 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.442 17:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.442 17:10:24 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:54.442 17:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.442 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.442 17:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.442 17:10:24 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.442 17:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.442 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.442 [2024-04-25 17:10:24.316635] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.442 17:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.442 17:10:24 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:54.442 17:10:24 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:54.442 17:10:24 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:54.442 17:10:24 -- common/autotest_common.sh@1366 -- # local bs 00:06:54.442 17:10:24 -- common/autotest_common.sh@1367 -- # local nb 00:06:54.442 17:10:24 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:54.442 17:10:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:54.442 17:10:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.442 
17:10:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:54.442 17:10:24 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:54.442 { 00:06:54.442 "aliases": [ 00:06:54.442 "5abf8c0f-5fbd-4f14-b001-cfda7e600d50" 00:06:54.442 ], 00:06:54.442 "assigned_rate_limits": { 00:06:54.442 "r_mbytes_per_sec": 0, 00:06:54.442 "rw_ios_per_sec": 0, 00:06:54.442 "rw_mbytes_per_sec": 0, 00:06:54.442 "w_mbytes_per_sec": 0 00:06:54.442 }, 00:06:54.442 "block_size": 512, 00:06:54.442 "claim_type": "exclusive_write", 00:06:54.442 "claimed": true, 00:06:54.442 "driver_specific": {}, 00:06:54.442 "memory_domains": [ 00:06:54.442 { 00:06:54.442 "dma_device_id": "system", 00:06:54.442 "dma_device_type": 1 00:06:54.442 }, 00:06:54.442 { 00:06:54.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.442 "dma_device_type": 2 00:06:54.442 } 00:06:54.442 ], 00:06:54.442 "name": "Malloc1", 00:06:54.442 "num_blocks": 1048576, 00:06:54.442 "product_name": "Malloc disk", 00:06:54.442 "supported_io_types": { 00:06:54.442 "abort": true, 00:06:54.442 "compare": false, 00:06:54.442 "compare_and_write": false, 00:06:54.442 "flush": true, 00:06:54.442 "nvme_admin": false, 00:06:54.442 "nvme_io": false, 00:06:54.442 "read": true, 00:06:54.442 "reset": true, 00:06:54.442 "unmap": true, 00:06:54.442 "write": true, 00:06:54.442 "write_zeroes": true 00:06:54.442 }, 00:06:54.442 "uuid": "5abf8c0f-5fbd-4f14-b001-cfda7e600d50", 00:06:54.442 "zoned": false 00:06:54.442 } 00:06:54.442 ]' 00:06:54.442 17:10:24 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:54.442 17:10:24 -- common/autotest_common.sh@1369 -- # bs=512 00:06:54.442 17:10:24 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:54.703 17:10:24 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:54.703 17:10:24 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:54.703 17:10:24 -- common/autotest_common.sh@1374 -- # echo 512 00:06:54.703 17:10:24 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:54.703 17:10:24 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.703 17:10:24 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.703 17:10:24 -- common/autotest_common.sh@1184 -- # local i=0 00:06:54.703 17:10:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.703 17:10:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:54.703 17:10:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:57.246 17:10:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:57.246 17:10:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:57.246 17:10:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:57.246 17:10:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:57.246 17:10:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:57.246 17:10:26 -- common/autotest_common.sh@1194 -- # return 0 00:06:57.246 17:10:26 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:57.246 17:10:26 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:57.246 17:10:26 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:57.246 17:10:26 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:57.246 17:10:26 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:06:57.246 17:10:26 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:57.246 17:10:26 -- setup/common.sh@80 -- # echo 536870912 00:06:57.246 17:10:26 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:57.246 17:10:26 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:57.246 17:10:26 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:57.246 17:10:26 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:57.246 17:10:26 -- target/filesystem.sh@69 -- # partprobe 00:06:57.246 17:10:26 -- target/filesystem.sh@70 -- # sleep 1 00:06:58.181 17:10:27 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:58.181 17:10:27 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:58.181 17:10:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:58.181 17:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.181 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:06:58.181 ************************************ 00:06:58.181 START TEST filesystem_ext4 00:06:58.181 ************************************ 00:06:58.181 17:10:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:58.181 17:10:27 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:58.181 17:10:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.181 17:10:27 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:58.181 17:10:27 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:58.181 17:10:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:58.181 17:10:27 -- common/autotest_common.sh@914 -- # local i=0 00:06:58.181 17:10:27 -- common/autotest_common.sh@915 -- # local force 00:06:58.181 17:10:27 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:58.181 17:10:27 -- common/autotest_common.sh@918 -- # force=-F 00:06:58.181 17:10:27 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:58.181 mke2fs 1.46.5 (30-Dec-2021) 00:06:58.181 Discarding device blocks: 0/522240 done 00:06:58.181 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:58.181 Filesystem UUID: 235598d5-b380-439d-90f1-942dce94c225 00:06:58.181 Superblock backups stored on blocks: 00:06:58.181 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:58.181 00:06:58.181 Allocating group tables: 0/64 done 00:06:58.181 Writing inode tables: 0/64 done 00:06:58.181 Creating journal (8192 blocks): done 00:06:58.181 Writing superblocks and filesystem accounting information: 0/64 done 00:06:58.181 00:06:58.182 17:10:27 -- common/autotest_common.sh@931 -- # return 0 00:06:58.182 17:10:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:58.182 17:10:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:58.182 17:10:28 -- target/filesystem.sh@25 -- # sync 00:06:58.182 17:10:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:58.182 17:10:28 -- target/filesystem.sh@27 -- # sync 00:06:58.182 17:10:28 -- target/filesystem.sh@29 -- # i=0 00:06:58.182 17:10:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:58.182 17:10:28 -- target/filesystem.sh@37 -- # kill -0 65396 00:06:58.182 17:10:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:58.182 17:10:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:58.440 17:10:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:58.441 17:10:28 -- target/filesystem.sh@43 -- # grep -q -w 
nvme0n1p1 00:06:58.441 00:06:58.441 real 0m0.303s 00:06:58.441 user 0m0.023s 00:06:58.441 sys 0m0.056s 00:06:58.441 17:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.441 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.441 ************************************ 00:06:58.441 END TEST filesystem_ext4 00:06:58.441 ************************************ 00:06:58.441 17:10:28 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:58.441 17:10:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:58.441 17:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.441 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.441 ************************************ 00:06:58.441 START TEST filesystem_btrfs 00:06:58.441 ************************************ 00:06:58.441 17:10:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:58.441 17:10:28 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:58.441 17:10:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.441 17:10:28 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:58.441 17:10:28 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:58.441 17:10:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:58.441 17:10:28 -- common/autotest_common.sh@914 -- # local i=0 00:06:58.441 17:10:28 -- common/autotest_common.sh@915 -- # local force 00:06:58.441 17:10:28 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:58.441 17:10:28 -- common/autotest_common.sh@920 -- # force=-f 00:06:58.441 17:10:28 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:58.441 btrfs-progs v6.6.2 00:06:58.441 See https://btrfs.readthedocs.io for more information. 00:06:58.441 00:06:58.441 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:58.441 NOTE: several default settings have changed in version 5.15, please make sure 00:06:58.441 this does not affect your deployments: 00:06:58.441 - DUP for metadata (-m dup) 00:06:58.441 - enabled no-holes (-O no-holes) 00:06:58.441 - enabled free-space-tree (-R free-space-tree) 00:06:58.441 00:06:58.441 Label: (null) 00:06:58.441 UUID: 2189650f-8adc-4c8a-a928-a562f982b6f4 00:06:58.441 Node size: 16384 00:06:58.441 Sector size: 4096 00:06:58.441 Filesystem size: 510.00MiB 00:06:58.441 Block group profiles: 00:06:58.441 Data: single 8.00MiB 00:06:58.441 Metadata: DUP 32.00MiB 00:06:58.441 System: DUP 8.00MiB 00:06:58.441 SSD detected: yes 00:06:58.441 Zoned device: no 00:06:58.441 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:58.441 Runtime features: free-space-tree 00:06:58.441 Checksum: crc32c 00:06:58.441 Number of devices: 1 00:06:58.441 Devices: 00:06:58.441 ID SIZE PATH 00:06:58.441 1 510.00MiB /dev/nvme0n1p1 00:06:58.441 00:06:58.441 17:10:28 -- common/autotest_common.sh@931 -- # return 0 00:06:58.441 17:10:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:58.700 17:10:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:58.700 17:10:28 -- target/filesystem.sh@25 -- # sync 00:06:58.700 17:10:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:58.700 17:10:28 -- target/filesystem.sh@27 -- # sync 00:06:58.700 17:10:28 -- target/filesystem.sh@29 -- # i=0 00:06:58.700 17:10:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:58.700 17:10:28 -- target/filesystem.sh@37 -- # kill -0 65396 00:06:58.700 17:10:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:58.700 17:10:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:58.700 17:10:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:58.700 17:10:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:58.700 00:06:58.700 real 0m0.233s 00:06:58.700 user 0m0.019s 00:06:58.700 sys 0m0.069s 00:06:58.700 17:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.700 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.700 ************************************ 00:06:58.700 END TEST filesystem_btrfs 00:06:58.700 ************************************ 00:06:58.700 17:10:28 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:58.700 17:10:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:58.700 17:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.700 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.700 ************************************ 00:06:58.700 START TEST filesystem_xfs 00:06:58.700 ************************************ 00:06:58.700 17:10:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:58.700 17:10:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:58.700 17:10:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:58.700 17:10:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:58.700 17:10:28 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:58.700 17:10:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:58.700 17:10:28 -- common/autotest_common.sh@914 -- # local i=0 00:06:58.700 17:10:28 -- common/autotest_common.sh@915 -- # local force 00:06:58.700 17:10:28 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:58.700 17:10:28 -- common/autotest_common.sh@920 -- # force=-f 00:06:58.700 17:10:28 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:58.700 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:58.700 = sectsz=512 attr=2, projid32bit=1 00:06:58.700 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:58.700 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:58.700 data = bsize=4096 blocks=130560, imaxpct=25 00:06:58.700 = sunit=0 swidth=0 blks 00:06:58.700 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:58.700 log =internal log bsize=4096 blocks=16384, version=2 00:06:58.700 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:58.700 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:59.636 Discarding blocks...Done. 00:06:59.636 17:10:29 -- common/autotest_common.sh@931 -- # return 0 00:06:59.636 17:10:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:02.168 17:10:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:02.168 17:10:31 -- target/filesystem.sh@25 -- # sync 00:07:02.168 17:10:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:02.168 17:10:31 -- target/filesystem.sh@27 -- # sync 00:07:02.168 17:10:31 -- target/filesystem.sh@29 -- # i=0 00:07:02.168 17:10:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:02.168 17:10:31 -- target/filesystem.sh@37 -- # kill -0 65396 00:07:02.168 17:10:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:02.168 17:10:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:02.168 17:10:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:02.168 17:10:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:02.168 00:07:02.168 real 0m3.104s 00:07:02.168 user 0m0.024s 00:07:02.168 sys 0m0.057s 00:07:02.168 17:10:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.168 17:10:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.168 ************************************ 00:07:02.168 END TEST filesystem_xfs 00:07:02.168 ************************************ 00:07:02.168 17:10:31 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:02.168 17:10:31 -- target/filesystem.sh@93 -- # sync 00:07:02.168 17:10:31 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:02.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.168 17:10:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:02.168 17:10:31 -- common/autotest_common.sh@1205 -- # local i=0 00:07:02.168 17:10:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:02.168 17:10:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.168 17:10:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:02.168 17:10:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:02.168 17:10:31 -- common/autotest_common.sh@1217 -- # return 0 00:07:02.168 17:10:31 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.168 17:10:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:02.168 17:10:31 -- common/autotest_common.sh@10 -- # set +x 00:07:02.168 17:10:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:02.168 17:10:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:02.168 17:10:31 -- target/filesystem.sh@101 -- # killprocess 65396 00:07:02.168 17:10:31 -- common/autotest_common.sh@936 -- # '[' -z 65396 ']' 00:07:02.168 17:10:31 -- common/autotest_common.sh@940 -- # kill -0 65396 00:07:02.168 17:10:31 -- 
common/autotest_common.sh@941 -- # uname 00:07:02.168 17:10:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.168 17:10:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65396 00:07:02.168 killing process with pid 65396 00:07:02.168 17:10:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.168 17:10:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.168 17:10:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65396' 00:07:02.168 17:10:31 -- common/autotest_common.sh@955 -- # kill 65396 00:07:02.168 17:10:31 -- common/autotest_common.sh@960 -- # wait 65396 00:07:02.427 17:10:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:02.427 00:07:02.427 real 0m9.046s 00:07:02.427 user 0m34.255s 00:07:02.427 sys 0m1.699s 00:07:02.427 17:10:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.427 ************************************ 00:07:02.427 END TEST nvmf_filesystem_no_in_capsule 00:07:02.427 ************************************ 00:07:02.427 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 17:10:32 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:02.427 17:10:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:02.427 17:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.427 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 ************************************ 00:07:02.427 START TEST nvmf_filesystem_in_capsule 00:07:02.427 ************************************ 00:07:02.427 17:10:32 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:02.427 17:10:32 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:02.427 17:10:32 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:02.427 17:10:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:02.427 17:10:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:02.427 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.427 17:10:32 -- nvmf/common.sh@470 -- # nvmfpid=65726 00:07:02.427 17:10:32 -- nvmf/common.sh@471 -- # waitforlisten 65726 00:07:02.427 17:10:32 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.427 17:10:32 -- common/autotest_common.sh@817 -- # '[' -z 65726 ']' 00:07:02.427 17:10:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.427 17:10:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:02.427 17:10:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.427 17:10:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:02.427 17:10:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.427 [2024-04-25 17:10:32.320170] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
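The no-in-capsule pass ends here: the target (pid 65396) is killed, its timing summary printed, and nvmf_filesystem_part restarts with in_capsule=4096, so the nvmf_create_transport call traced below carries -c 4096 (allowing write data to ride inside the command capsule) where the first pass used -c 0. The rpc_cmd wrapper drives scripts/rpc.py against the target's RPC socket; issued by hand, the same bring-up and host-side connect would look roughly as follows (the rpc.py invocation path is an assumption; subsystem, bdev, address, and host NQN values are taken from the trace):

  # Target side: the sequence rpc_cmd performs for the in-capsule run
  # (flags mirror the traced calls).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: connect over TCP and check the namespace shows up.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 \
      --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1 device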
00:07:02.427 [2024-04-25 17:10:32.320264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.685 [2024-04-25 17:10:32.453067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.685 [2024-04-25 17:10:32.508597] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.685 [2024-04-25 17:10:32.508904] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.685 [2024-04-25 17:10:32.509058] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.685 [2024-04-25 17:10:32.509188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.685 [2024-04-25 17:10:32.509237] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.685 [2024-04-25 17:10:32.509458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.685 [2024-04-25 17:10:32.509513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.685 [2024-04-25 17:10:32.509635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.685 [2024-04-25 17:10:32.509638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.624 17:10:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:03.624 17:10:33 -- common/autotest_common.sh@850 -- # return 0 00:07:03.624 17:10:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:03.624 17:10:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:03.624 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.624 17:10:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.624 17:10:33 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:03.624 17:10:33 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:03.624 17:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.624 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.624 [2024-04-25 17:10:33.314291] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.624 17:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.624 17:10:33 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:03.624 17:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.624 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.624 Malloc1 00:07:03.624 17:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.624 17:10:33 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:03.624 17:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.624 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.624 17:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.624 17:10:33 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:03.624 17:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.624 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.624 17:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.624 17:10:33 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.624 17:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.624 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.624 [2024-04-25 17:10:33.442532] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.624 17:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.624 17:10:33 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:03.624 17:10:33 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:03.624 17:10:33 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:03.624 17:10:33 -- common/autotest_common.sh@1366 -- # local bs 00:07:03.624 17:10:33 -- common/autotest_common.sh@1367 -- # local nb 00:07:03.624 17:10:33 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:03.624 17:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:03.624 17:10:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.624 17:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:03.624 17:10:33 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:03.624 { 00:07:03.624 "aliases": [ 00:07:03.624 "6e56e0cf-8fbf-4be0-8618-29cd26caa859" 00:07:03.624 ], 00:07:03.624 "assigned_rate_limits": { 00:07:03.624 "r_mbytes_per_sec": 0, 00:07:03.624 "rw_ios_per_sec": 0, 00:07:03.624 "rw_mbytes_per_sec": 0, 00:07:03.624 "w_mbytes_per_sec": 0 00:07:03.624 }, 00:07:03.624 "block_size": 512, 00:07:03.624 "claim_type": "exclusive_write", 00:07:03.624 "claimed": true, 00:07:03.624 "driver_specific": {}, 00:07:03.624 "memory_domains": [ 00:07:03.624 { 00:07:03.624 "dma_device_id": "system", 00:07:03.624 "dma_device_type": 1 00:07:03.624 }, 00:07:03.624 { 00:07:03.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.624 "dma_device_type": 2 00:07:03.624 } 00:07:03.624 ], 00:07:03.624 "name": "Malloc1", 00:07:03.624 "num_blocks": 1048576, 00:07:03.624 "product_name": "Malloc disk", 00:07:03.624 "supported_io_types": { 00:07:03.624 "abort": true, 00:07:03.624 "compare": false, 00:07:03.624 "compare_and_write": false, 00:07:03.624 "flush": true, 00:07:03.624 "nvme_admin": false, 00:07:03.624 "nvme_io": false, 00:07:03.624 "read": true, 00:07:03.624 "reset": true, 00:07:03.624 "unmap": true, 00:07:03.624 "write": true, 00:07:03.624 "write_zeroes": true 00:07:03.624 }, 00:07:03.624 "uuid": "6e56e0cf-8fbf-4be0-8618-29cd26caa859", 00:07:03.624 "zoned": false 00:07:03.624 } 00:07:03.624 ]' 00:07:03.624 17:10:33 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:03.624 17:10:33 -- common/autotest_common.sh@1369 -- # bs=512 00:07:03.624 17:10:33 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:03.624 17:10:33 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:03.624 17:10:33 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:03.624 17:10:33 -- common/autotest_common.sh@1374 -- # echo 512 00:07:03.624 17:10:33 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:03.624 17:10:33 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.883 17:10:33 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:03.883 17:10:33 -- common/autotest_common.sh@1184 -- # local i=0 00:07:03.883 17:10:33 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:03.883 17:10:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:03.883 17:10:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:05.784 17:10:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:05.784 17:10:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:05.784 17:10:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.784 17:10:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:05.784 17:10:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.784 17:10:35 -- common/autotest_common.sh@1194 -- # return 0 00:07:05.784 17:10:35 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:05.784 17:10:35 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:06.042 17:10:35 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:06.042 17:10:35 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:06.042 17:10:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:06.042 17:10:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:06.042 17:10:35 -- setup/common.sh@80 -- # echo 536870912 00:07:06.042 17:10:35 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:06.042 17:10:35 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:06.042 17:10:35 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:06.042 17:10:35 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:06.042 17:10:35 -- target/filesystem.sh@69 -- # partprobe 00:07:06.042 17:10:35 -- target/filesystem.sh@70 -- # sleep 1 00:07:06.976 17:10:36 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:06.976 17:10:36 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:06.976 17:10:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:06.976 17:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.976 17:10:36 -- common/autotest_common.sh@10 -- # set +x 00:07:07.248 ************************************ 00:07:07.248 START TEST filesystem_in_capsule_ext4 00:07:07.248 ************************************ 00:07:07.248 17:10:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:07.248 17:10:36 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:07.248 17:10:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.248 17:10:36 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:07.248 17:10:36 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:07.248 17:10:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:07.248 17:10:36 -- common/autotest_common.sh@914 -- # local i=0 00:07:07.248 17:10:36 -- common/autotest_common.sh@915 -- # local force 00:07:07.248 17:10:36 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:07.248 17:10:36 -- common/autotest_common.sh@918 -- # force=-F 00:07:07.248 17:10:36 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:07.248 mke2fs 1.46.5 (30-Dec-2021) 00:07:07.248 Discarding device blocks: 0/522240 done 00:07:07.248 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:07.248 Filesystem UUID: 11b45fc4-001b-4489-beee-138861a445a1 00:07:07.248 Superblock backups stored on blocks: 00:07:07.248 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:07.248 00:07:07.248 Allocating group tables: 0/64 done 
00:07:07.248 Writing inode tables: 0/64 done 00:07:07.248 Creating journal (8192 blocks): done 00:07:07.248 Writing superblocks and filesystem accounting information: 0/64 done 00:07:07.248 00:07:07.248 17:10:37 -- common/autotest_common.sh@931 -- # return 0 00:07:07.248 17:10:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.248 17:10:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.248 17:10:37 -- target/filesystem.sh@25 -- # sync 00:07:07.517 17:10:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.517 17:10:37 -- target/filesystem.sh@27 -- # sync 00:07:07.517 17:10:37 -- target/filesystem.sh@29 -- # i=0 00:07:07.517 17:10:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.517 17:10:37 -- target/filesystem.sh@37 -- # kill -0 65726 00:07:07.517 17:10:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.517 17:10:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.517 17:10:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.517 17:10:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.517 ************************************ 00:07:07.517 END TEST filesystem_in_capsule_ext4 00:07:07.517 ************************************ 00:07:07.517 00:07:07.517 real 0m0.319s 00:07:07.517 user 0m0.027s 00:07:07.517 sys 0m0.051s 00:07:07.517 17:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.517 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.517 17:10:37 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:07.517 17:10:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:07.517 17:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.517 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.517 ************************************ 00:07:07.517 START TEST filesystem_in_capsule_btrfs 00:07:07.517 ************************************ 00:07:07.517 17:10:37 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:07.517 17:10:37 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:07.517 17:10:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.517 17:10:37 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:07.517 17:10:37 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:07.517 17:10:37 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:07.517 17:10:37 -- common/autotest_common.sh@914 -- # local i=0 00:07:07.517 17:10:37 -- common/autotest_common.sh@915 -- # local force 00:07:07.517 17:10:37 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:07.517 17:10:37 -- common/autotest_common.sh@920 -- # force=-f 00:07:07.517 17:10:37 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:07.517 btrfs-progs v6.6.2 00:07:07.517 See https://btrfs.readthedocs.io for more information. 00:07:07.517 00:07:07.517 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:07.517 NOTE: several default settings have changed in version 5.15, please make sure 00:07:07.517 this does not affect your deployments: 00:07:07.517 - DUP for metadata (-m dup) 00:07:07.517 - enabled no-holes (-O no-holes) 00:07:07.517 - enabled free-space-tree (-R free-space-tree) 00:07:07.517 00:07:07.517 Label: (null) 00:07:07.517 UUID: 1b02f6f3-d11e-4e34-b0d1-e7279f5a3d5a 00:07:07.517 Node size: 16384 00:07:07.517 Sector size: 4096 00:07:07.517 Filesystem size: 510.00MiB 00:07:07.517 Block group profiles: 00:07:07.517 Data: single 8.00MiB 00:07:07.517 Metadata: DUP 32.00MiB 00:07:07.517 System: DUP 8.00MiB 00:07:07.517 SSD detected: yes 00:07:07.517 Zoned device: no 00:07:07.517 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:07.517 Runtime features: free-space-tree 00:07:07.517 Checksum: crc32c 00:07:07.517 Number of devices: 1 00:07:07.517 Devices: 00:07:07.517 ID SIZE PATH 00:07:07.517 1 510.00MiB /dev/nvme0n1p1 00:07:07.517 00:07:07.517 17:10:37 -- common/autotest_common.sh@931 -- # return 0 00:07:07.517 17:10:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:07.775 17:10:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:07.775 17:10:37 -- target/filesystem.sh@25 -- # sync 00:07:07.775 17:10:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:07.775 17:10:37 -- target/filesystem.sh@27 -- # sync 00:07:07.775 17:10:37 -- target/filesystem.sh@29 -- # i=0 00:07:07.775 17:10:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:07.775 17:10:37 -- target/filesystem.sh@37 -- # kill -0 65726 00:07:07.775 17:10:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:07.775 17:10:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:07.775 17:10:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:07.775 17:10:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:07.775 ************************************ 00:07:07.775 END TEST filesystem_in_capsule_btrfs 00:07:07.775 ************************************ 00:07:07.775 00:07:07.775 real 0m0.173s 00:07:07.775 user 0m0.018s 00:07:07.775 sys 0m0.064s 00:07:07.775 17:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.775 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.775 17:10:37 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:07.775 17:10:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:07.775 17:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.775 17:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.775 ************************************ 00:07:07.775 START TEST filesystem_in_capsule_xfs 00:07:07.775 ************************************ 00:07:07.775 17:10:37 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:07.775 17:10:37 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:07.775 17:10:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.775 17:10:37 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:07.775 17:10:37 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:07.775 17:10:37 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:07.775 17:10:37 -- common/autotest_common.sh@914 -- # local i=0 00:07:07.776 17:10:37 -- common/autotest_common.sh@915 -- # local force 00:07:07.776 17:10:37 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:07.776 17:10:37 -- common/autotest_common.sh@920 -- # force=-f 
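Before the xfs pass traced below, it may help to see the whole cycle each filesystem_in_capsule_* test repeats. This is a condensed sketch, not the literal filesystem.sh: rpc_cmd in the trace wraps SPDK's scripts/rpc.py (path assumed here), the --hostnqn/--hostid arguments to nvme connect are dropped, and error handling plus the waitforserial retry loop are omitted.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed helper path
  # Target side: TCP transport with 4096-byte in-capsule data, a malloc bdev,
  # one subsystem carrying that namespace, listening on 10.0.0.2:4420.
  $rpc_py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  $rpc_py bdev_malloc_create 512 512 -b Malloc1
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: connect, partition the namespace, then format/mount/exercise each fs type.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  for fstype in ext4 btrfs xfs; do
      force=-f; [ "$fstype" = ext4 ] && force=-F       # mkfs.ext4 takes -F, the others -f
      mkfs.$fstype $force /dev/nvme0n1p1
      mount /dev/nvme0n1p1 /mnt/device
      touch /mnt/device/aaa && sync
      rm /mnt/device/aaa && sync
      umount /mnt/device
  done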
00:07:07.776 17:10:37 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:07.776 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:07.776 = sectsz=512 attr=2, projid32bit=1 00:07:07.776 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:07.776 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:07.776 data = bsize=4096 blocks=130560, imaxpct=25 00:07:07.776 = sunit=0 swidth=0 blks 00:07:07.776 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:07.776 log =internal log bsize=4096 blocks=16384, version=2 00:07:07.776 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:07.776 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:08.710 Discarding blocks...Done. 00:07:08.710 17:10:38 -- common/autotest_common.sh@931 -- # return 0 00:07:08.710 17:10:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:10.617 17:10:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:10.617 17:10:40 -- target/filesystem.sh@25 -- # sync 00:07:10.617 17:10:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:10.617 17:10:40 -- target/filesystem.sh@27 -- # sync 00:07:10.617 17:10:40 -- target/filesystem.sh@29 -- # i=0 00:07:10.617 17:10:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:10.617 17:10:40 -- target/filesystem.sh@37 -- # kill -0 65726 00:07:10.617 17:10:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:10.617 17:10:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:10.617 17:10:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:10.617 17:10:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:10.617 ************************************ 00:07:10.617 END TEST filesystem_in_capsule_xfs 00:07:10.617 ************************************ 00:07:10.617 00:07:10.617 real 0m2.552s 00:07:10.617 user 0m0.029s 00:07:10.617 sys 0m0.047s 00:07:10.617 17:10:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.617 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.617 17:10:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:10.617 17:10:40 -- target/filesystem.sh@93 -- # sync 00:07:10.617 17:10:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.617 17:10:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.617 17:10:40 -- common/autotest_common.sh@1205 -- # local i=0 00:07:10.617 17:10:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:10.617 17:10:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.617 17:10:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:10.617 17:10:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.617 17:10:40 -- common/autotest_common.sh@1217 -- # return 0 00:07:10.617 17:10:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.617 17:10:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:10.617 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.617 17:10:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:10.617 17:10:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:10.617 17:10:40 -- target/filesystem.sh@101 -- # killprocess 65726 00:07:10.617 17:10:40 -- common/autotest_common.sh@936 -- # '[' -z 65726 ']' 00:07:10.617 17:10:40 -- common/autotest_common.sh@940 -- # kill -0 65726 
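The teardown interleaved through this part of the trace follows a fixed order. Roughly, with illustrative variable names, the waitforserial_disconnect retry loop omitted, and $rpc_py as in the sketch above:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition under a lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the initiator first
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                     # stop nvmf_tgt (pid 65726 in this run)
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring         # nvmftestfini unloads the initiator modules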
00:07:10.617 17:10:40 -- common/autotest_common.sh@941 -- # uname 00:07:10.617 17:10:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.617 17:10:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65726 00:07:10.617 killing process with pid 65726 00:07:10.617 17:10:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.617 17:10:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.617 17:10:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65726' 00:07:10.617 17:10:40 -- common/autotest_common.sh@955 -- # kill 65726 00:07:10.617 17:10:40 -- common/autotest_common.sh@960 -- # wait 65726 00:07:10.876 17:10:40 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:10.876 00:07:10.876 real 0m8.404s 00:07:10.876 user 0m31.898s 00:07:10.876 sys 0m1.614s 00:07:10.876 17:10:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.876 ************************************ 00:07:10.876 END TEST nvmf_filesystem_in_capsule 00:07:10.876 ************************************ 00:07:10.876 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.876 17:10:40 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:10.876 17:10:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:10.876 17:10:40 -- nvmf/common.sh@117 -- # sync 00:07:10.876 17:10:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:10.876 17:10:40 -- nvmf/common.sh@120 -- # set +e 00:07:10.876 17:10:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:10.876 17:10:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:10.876 rmmod nvme_tcp 00:07:10.876 rmmod nvme_fabrics 00:07:10.876 rmmod nvme_keyring 00:07:10.876 17:10:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:10.876 17:10:40 -- nvmf/common.sh@124 -- # set -e 00:07:10.876 17:10:40 -- nvmf/common.sh@125 -- # return 0 00:07:10.876 17:10:40 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:10.876 17:10:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:10.876 17:10:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:10.876 17:10:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:10.876 17:10:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.876 17:10:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:10.876 17:10:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.876 17:10:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.876 17:10:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.876 17:10:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:10.876 00:07:10.876 real 0m18.371s 00:07:10.876 user 1m6.449s 00:07:10.876 sys 0m3.743s 00:07:10.876 17:10:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.876 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.876 ************************************ 00:07:10.876 END TEST nvmf_filesystem 00:07:10.876 ************************************ 00:07:11.135 17:10:40 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:11.135 17:10:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:11.135 17:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.136 17:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:11.136 ************************************ 00:07:11.136 START TEST nvmf_discovery 00:07:11.136 ************************************ 00:07:11.136 17:10:40 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:11.136 * Looking for test storage... 00:07:11.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:11.136 17:10:41 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:11.136 17:10:41 -- nvmf/common.sh@7 -- # uname -s 00:07:11.136 17:10:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.136 17:10:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.136 17:10:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.136 17:10:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.136 17:10:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.136 17:10:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.136 17:10:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.136 17:10:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.136 17:10:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.136 17:10:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.136 17:10:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:07:11.136 17:10:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:07:11.136 17:10:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.136 17:10:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.136 17:10:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:11.136 17:10:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.136 17:10:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:11.136 17:10:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.136 17:10:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.136 17:10:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.136 17:10:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.136 17:10:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.136 17:10:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.136 17:10:41 -- paths/export.sh@5 -- # export PATH 00:07:11.136 17:10:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.136 17:10:41 -- nvmf/common.sh@47 -- # : 0 00:07:11.136 17:10:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.136 17:10:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.136 17:10:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.136 17:10:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.136 17:10:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.136 17:10:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.136 17:10:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.136 17:10:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.136 17:10:41 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:11.136 17:10:41 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:11.136 17:10:41 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:11.136 17:10:41 -- target/discovery.sh@15 -- # hash nvme 00:07:11.136 17:10:41 -- target/discovery.sh@20 -- # nvmftestinit 00:07:11.136 17:10:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:11.136 17:10:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.136 17:10:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:11.136 17:10:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:11.136 17:10:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:11.136 17:10:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.136 17:10:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.136 17:10:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.136 17:10:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:07:11.136 17:10:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:07:11.136 17:10:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:07:11.136 17:10:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:07:11.136 17:10:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:07:11.136 17:10:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:07:11.136 17:10:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.136 17:10:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.136 17:10:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:11.136 17:10:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:11.136 17:10:41 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:11.136 17:10:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:11.136 17:10:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:11.136 17:10:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.136 17:10:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:11.136 17:10:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:11.136 17:10:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:11.136 17:10:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:11.136 17:10:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:11.136 17:10:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:11.136 Cannot find device "nvmf_tgt_br" 00:07:11.136 17:10:41 -- nvmf/common.sh@155 -- # true 00:07:11.136 17:10:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:11.395 Cannot find device "nvmf_tgt_br2" 00:07:11.395 17:10:41 -- nvmf/common.sh@156 -- # true 00:07:11.395 17:10:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:11.395 17:10:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:11.395 Cannot find device "nvmf_tgt_br" 00:07:11.395 17:10:41 -- nvmf/common.sh@158 -- # true 00:07:11.395 17:10:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:11.395 Cannot find device "nvmf_tgt_br2" 00:07:11.395 17:10:41 -- nvmf/common.sh@159 -- # true 00:07:11.395 17:10:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:11.395 17:10:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:11.395 17:10:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:11.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:11.395 17:10:41 -- nvmf/common.sh@162 -- # true 00:07:11.395 17:10:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:11.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:11.395 17:10:41 -- nvmf/common.sh@163 -- # true 00:07:11.395 17:10:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:11.395 17:10:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:11.395 17:10:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:11.395 17:10:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:11.395 17:10:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:11.395 17:10:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:11.395 17:10:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:11.395 17:10:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:11.395 17:10:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:11.395 17:10:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:11.395 17:10:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:11.395 17:10:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:11.395 17:10:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:11.395 17:10:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:11.395 17:10:41 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:11.395 17:10:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:11.395 17:10:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:11.395 17:10:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:11.395 17:10:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:11.653 17:10:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:11.653 17:10:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:11.653 17:10:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:11.653 17:10:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:11.653 17:10:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:11.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:07:11.653 00:07:11.653 --- 10.0.0.2 ping statistics --- 00:07:11.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.653 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:11.653 17:10:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:11.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:11.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:07:11.653 00:07:11.653 --- 10.0.0.3 ping statistics --- 00:07:11.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.653 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:11.653 17:10:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:11.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:07:11.653 00:07:11.653 --- 10.0.0.1 ping statistics --- 00:07:11.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.653 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:07:11.653 17:10:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.653 17:10:41 -- nvmf/common.sh@422 -- # return 0 00:07:11.653 17:10:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:11.653 17:10:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.653 17:10:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:11.653 17:10:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:11.653 17:10:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.653 17:10:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:11.653 17:10:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:11.653 17:10:41 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:11.653 17:10:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:11.653 17:10:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:11.653 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
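The "Cannot find device" and "Cannot open network namespace" messages above come from the cleanup that nvmf_veth_init runs before building anything, so they are expected on a fresh host. The topology it then creates, condensed from the trace (the second target interface on 10.0.0.3 and the bridge FORWARD rule are set up the same way and left out here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # host namespace -> target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator address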
00:07:11.653 17:10:41 -- nvmf/common.sh@470 -- # nvmfpid=66199 00:07:11.653 17:10:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:11.653 17:10:41 -- nvmf/common.sh@471 -- # waitforlisten 66199 00:07:11.653 17:10:41 -- common/autotest_common.sh@817 -- # '[' -z 66199 ']' 00:07:11.653 17:10:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.653 17:10:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.653 17:10:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.653 17:10:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.653 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.653 [2024-04-25 17:10:41.498459] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:07:11.653 [2024-04-25 17:10:41.498588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.911 [2024-04-25 17:10:41.641266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.911 [2024-04-25 17:10:41.719268] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.911 [2024-04-25 17:10:41.719594] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.911 [2024-04-25 17:10:41.719891] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.911 [2024-04-25 17:10:41.720117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.911 [2024-04-25 17:10:41.720230] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
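nvmfappstart then runs the target binary inside that namespace and blocks until its RPC socket answers. A simplified version of what the trace shows (the real waitforlisten helper also verifies the pid stays alive and eventually times out):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app is ready to serve requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done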
00:07:11.911 [2024-04-25 17:10:41.720538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.911 [2024-04-25 17:10:41.720672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.911 [2024-04-25 17:10:41.720750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.911 [2024-04-25 17:10:41.720750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.911 17:10:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.911 17:10:41 -- common/autotest_common.sh@850 -- # return 0 00:07:11.911 17:10:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:11.911 17:10:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:11.911 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.911 17:10:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.911 17:10:41 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.911 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.911 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.911 [2024-04-25 17:10:41.856560] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.911 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.169 17:10:41 -- target/discovery.sh@26 -- # seq 1 4 00:07:12.169 17:10:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:12.169 17:10:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:12.169 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.169 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 Null1 00:07:12.169 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.169 17:10:41 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:12.169 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.169 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.169 17:10:41 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:12.169 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.169 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.169 17:10:41 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.169 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.169 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 [2024-04-25 17:10:41.926257] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.169 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.169 17:10:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:12.169 17:10:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:12.169 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.169 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 Null2 00:07:12.169 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.169 17:10:41 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:12.169 17:10:41 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.169 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.169 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:41 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:12.170 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:41 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:12.170 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:12.170 17:10:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:12.170 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 Null3 00:07:12.170 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:41 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:12.170 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:41 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:12.170 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:41 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:12.170 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:41 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:12.170 17:10:41 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:12.170 17:10:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:41 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 Null4 00:07:12.170 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:12.170 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:12.170 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:12.170 
17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:42 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.170 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:42 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:12.170 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.170 17:10:42 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 4420 00:07:12.170 00:07:12.170 Discovery Log Number of Records 6, Generation counter 6 00:07:12.170 =====Discovery Log Entry 0====== 00:07:12.170 trtype: tcp 00:07:12.170 adrfam: ipv4 00:07:12.170 subtype: current discovery subsystem 00:07:12.170 treq: not required 00:07:12.170 portid: 0 00:07:12.170 trsvcid: 4420 00:07:12.170 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:12.170 traddr: 10.0.0.2 00:07:12.170 eflags: explicit discovery connections, duplicate discovery information 00:07:12.170 sectype: none 00:07:12.170 =====Discovery Log Entry 1====== 00:07:12.170 trtype: tcp 00:07:12.170 adrfam: ipv4 00:07:12.170 subtype: nvme subsystem 00:07:12.170 treq: not required 00:07:12.170 portid: 0 00:07:12.170 trsvcid: 4420 00:07:12.170 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:12.170 traddr: 10.0.0.2 00:07:12.170 eflags: none 00:07:12.170 sectype: none 00:07:12.170 =====Discovery Log Entry 2====== 00:07:12.170 trtype: tcp 00:07:12.170 adrfam: ipv4 00:07:12.170 subtype: nvme subsystem 00:07:12.170 treq: not required 00:07:12.170 portid: 0 00:07:12.170 trsvcid: 4420 00:07:12.170 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:12.170 traddr: 10.0.0.2 00:07:12.170 eflags: none 00:07:12.170 sectype: none 00:07:12.170 =====Discovery Log Entry 3====== 00:07:12.170 trtype: tcp 00:07:12.170 adrfam: ipv4 00:07:12.170 subtype: nvme subsystem 00:07:12.170 treq: not required 00:07:12.170 portid: 0 00:07:12.170 trsvcid: 4420 00:07:12.170 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:12.170 traddr: 10.0.0.2 00:07:12.170 eflags: none 00:07:12.170 sectype: none 00:07:12.170 =====Discovery Log Entry 4====== 00:07:12.170 trtype: tcp 00:07:12.170 adrfam: ipv4 00:07:12.170 subtype: nvme subsystem 00:07:12.170 treq: not required 00:07:12.170 portid: 0 00:07:12.170 trsvcid: 4420 00:07:12.170 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:12.170 traddr: 10.0.0.2 00:07:12.170 eflags: none 00:07:12.170 sectype: none 00:07:12.170 =====Discovery Log Entry 5====== 00:07:12.170 trtype: tcp 00:07:12.170 adrfam: ipv4 00:07:12.170 subtype: discovery subsystem referral 00:07:12.170 treq: not required 00:07:12.170 portid: 0 00:07:12.170 trsvcid: 4430 00:07:12.170 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:12.170 traddr: 10.0.0.2 00:07:12.170 eflags: none 00:07:12.170 sectype: none 00:07:12.170 Perform nvmf subsystem discovery via RPC 00:07:12.170 17:10:42 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:12.170 17:10:42 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:12.170 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.170 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.170 [2024-04-25 17:10:42.122195] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:12.170 [ 00:07:12.170 { 00:07:12.170 "allow_any_host": true, 00:07:12.170 "hosts": [], 00:07:12.170 "listen_addresses": [ 00:07:12.170 { 00:07:12.170 "adrfam": "IPv4", 00:07:12.170 "traddr": "10.0.0.2", 00:07:12.170 "transport": "TCP", 00:07:12.170 "trsvcid": "4420", 00:07:12.170 "trtype": "TCP" 00:07:12.170 } 00:07:12.170 ], 00:07:12.170 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:12.170 "subtype": "Discovery" 00:07:12.170 }, 00:07:12.170 { 00:07:12.170 "allow_any_host": true, 00:07:12.170 "hosts": [], 00:07:12.170 "listen_addresses": [ 00:07:12.170 { 00:07:12.170 "adrfam": "IPv4", 00:07:12.170 "traddr": "10.0.0.2", 00:07:12.170 "transport": "TCP", 00:07:12.170 "trsvcid": "4420", 00:07:12.170 "trtype": "TCP" 00:07:12.170 } 00:07:12.170 ], 00:07:12.170 "max_cntlid": 65519, 00:07:12.170 "max_namespaces": 32, 00:07:12.170 "min_cntlid": 1, 00:07:12.170 "model_number": "SPDK bdev Controller", 00:07:12.170 "namespaces": [ 00:07:12.170 { 00:07:12.170 "bdev_name": "Null1", 00:07:12.170 "name": "Null1", 00:07:12.170 "nguid": "F92B7B55F04E4C9CA608ADD6E604C885", 00:07:12.170 "nsid": 1, 00:07:12.170 "uuid": "f92b7b55-f04e-4c9c-a608-add6e604c885" 00:07:12.170 } 00:07:12.170 ], 00:07:12.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:12.170 "serial_number": "SPDK00000000000001", 00:07:12.170 "subtype": "NVMe" 00:07:12.170 }, 00:07:12.170 { 00:07:12.170 "allow_any_host": true, 00:07:12.170 "hosts": [], 00:07:12.170 "listen_addresses": [ 00:07:12.170 { 00:07:12.170 "adrfam": "IPv4", 00:07:12.170 "traddr": "10.0.0.2", 00:07:12.170 "transport": "TCP", 00:07:12.170 "trsvcid": "4420", 00:07:12.170 "trtype": "TCP" 00:07:12.170 } 00:07:12.170 ], 00:07:12.170 "max_cntlid": 65519, 00:07:12.170 "max_namespaces": 32, 00:07:12.170 "min_cntlid": 1, 00:07:12.170 "model_number": "SPDK bdev Controller", 00:07:12.170 "namespaces": [ 00:07:12.170 { 00:07:12.170 "bdev_name": "Null2", 00:07:12.170 "name": "Null2", 00:07:12.170 "nguid": "E6DA223171B840A5ABBFBB5F67203FEF", 00:07:12.170 "nsid": 1, 00:07:12.170 "uuid": "e6da2231-71b8-40a5-abbf-bb5f67203fef" 00:07:12.170 } 00:07:12.170 ], 00:07:12.170 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:12.170 "serial_number": "SPDK00000000000002", 00:07:12.170 "subtype": "NVMe" 00:07:12.170 }, 00:07:12.170 { 00:07:12.170 "allow_any_host": true, 00:07:12.429 "hosts": [], 00:07:12.429 "listen_addresses": [ 00:07:12.429 { 00:07:12.429 "adrfam": "IPv4", 00:07:12.429 "traddr": "10.0.0.2", 00:07:12.429 "transport": "TCP", 00:07:12.429 "trsvcid": "4420", 00:07:12.429 "trtype": "TCP" 00:07:12.429 } 00:07:12.429 ], 00:07:12.429 "max_cntlid": 65519, 00:07:12.429 "max_namespaces": 32, 00:07:12.429 "min_cntlid": 1, 00:07:12.429 "model_number": "SPDK bdev Controller", 00:07:12.429 "namespaces": [ 00:07:12.429 { 00:07:12.429 "bdev_name": "Null3", 00:07:12.429 "name": "Null3", 00:07:12.429 "nguid": "0B5E764ED79541DE9CD8B138D8C7DE24", 00:07:12.429 "nsid": 1, 00:07:12.429 "uuid": "0b5e764e-d795-41de-9cd8-b138d8c7de24" 00:07:12.429 } 00:07:12.429 ], 00:07:12.429 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:12.429 "serial_number": "SPDK00000000000003", 00:07:12.429 "subtype": "NVMe" 
00:07:12.429 }, 00:07:12.429 { 00:07:12.429 "allow_any_host": true, 00:07:12.429 "hosts": [], 00:07:12.429 "listen_addresses": [ 00:07:12.429 { 00:07:12.429 "adrfam": "IPv4", 00:07:12.429 "traddr": "10.0.0.2", 00:07:12.429 "transport": "TCP", 00:07:12.429 "trsvcid": "4420", 00:07:12.429 "trtype": "TCP" 00:07:12.429 } 00:07:12.429 ], 00:07:12.429 "max_cntlid": 65519, 00:07:12.429 "max_namespaces": 32, 00:07:12.429 "min_cntlid": 1, 00:07:12.429 "model_number": "SPDK bdev Controller", 00:07:12.429 "namespaces": [ 00:07:12.429 { 00:07:12.429 "bdev_name": "Null4", 00:07:12.429 "name": "Null4", 00:07:12.429 "nguid": "C29E923AD965413BB2CE480C77B29201", 00:07:12.429 "nsid": 1, 00:07:12.429 "uuid": "c29e923a-d965-413b-b2ce-480c77b29201" 00:07:12.429 } 00:07:12.429 ], 00:07:12.429 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:12.429 "serial_number": "SPDK00000000000004", 00:07:12.429 "subtype": "NVMe" 00:07:12.429 } 00:07:12.429 ] 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@42 -- # seq 1 4 00:07:12.429 17:10:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:12.429 17:10:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:12.429 17:10:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:12.429 17:10:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:12.429 17:10:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
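For reference, the discovery layout that the preceding trace built, and that nvme discover (6 log records) and nvmf_get_subsystems then verified, condenses to roughly the following before the per-subsystem cleanup loop tears it back down. The rpc_py path and loop form are illustrative:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed helper path
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      $rpc_py bdev_null_create Null$i 102400 512
      $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # Expected discovery log: the discovery subsystem itself, cnode1-4, and the 4430 referral.
  nvme discover -t tcp -a 10.0.0.2 -s 4420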
00:07:12.429 17:10:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:12.429 17:10:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.429 17:10:42 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:12.429 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.429 17:10:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.429 17:10:42 -- target/discovery.sh@49 -- # check_bdevs= 00:07:12.429 17:10:42 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:12.429 17:10:42 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:12.430 17:10:42 -- target/discovery.sh@57 -- # nvmftestfini 00:07:12.430 17:10:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:12.430 17:10:42 -- nvmf/common.sh@117 -- # sync 00:07:12.430 17:10:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:12.430 17:10:42 -- nvmf/common.sh@120 -- # set +e 00:07:12.430 17:10:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:12.430 17:10:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:12.430 rmmod nvme_tcp 00:07:12.430 rmmod nvme_fabrics 00:07:12.430 rmmod nvme_keyring 00:07:12.430 17:10:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:12.430 17:10:42 -- nvmf/common.sh@124 -- # set -e 00:07:12.430 17:10:42 -- nvmf/common.sh@125 -- # return 0 00:07:12.430 17:10:42 -- nvmf/common.sh@478 -- # '[' -n 66199 ']' 00:07:12.430 17:10:42 -- nvmf/common.sh@479 -- # killprocess 66199 00:07:12.430 17:10:42 -- common/autotest_common.sh@936 -- # '[' -z 66199 ']' 00:07:12.430 17:10:42 -- common/autotest_common.sh@940 -- # kill -0 66199 00:07:12.430 17:10:42 -- common/autotest_common.sh@941 -- # uname 00:07:12.430 17:10:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.430 17:10:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66199 00:07:12.430 killing process with pid 66199 00:07:12.430 17:10:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.430 17:10:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.430 17:10:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66199' 00:07:12.430 17:10:42 -- common/autotest_common.sh@955 -- # kill 66199 00:07:12.430 [2024-04-25 17:10:42.387433] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:12.430 17:10:42 -- common/autotest_common.sh@960 -- # wait 66199 00:07:12.688 17:10:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:12.688 17:10:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:12.688 17:10:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:12.688 17:10:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:12.688 17:10:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:12.688 17:10:42 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.688 17:10:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.688 17:10:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.688 17:10:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:12.688 ************************************ 00:07:12.688 END TEST nvmf_discovery 00:07:12.688 ************************************ 00:07:12.688 00:07:12.688 real 0m1.642s 00:07:12.688 user 0m3.476s 00:07:12.688 sys 0m0.504s 00:07:12.688 17:10:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.688 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.688 17:10:42 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:12.688 17:10:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:12.688 17:10:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.688 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.947 ************************************ 00:07:12.947 START TEST nvmf_referrals 00:07:12.947 ************************************ 00:07:12.947 17:10:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:12.947 * Looking for test storage... 00:07:12.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.947 17:10:42 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.947 17:10:42 -- nvmf/common.sh@7 -- # uname -s 00:07:12.947 17:10:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.947 17:10:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.947 17:10:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.947 17:10:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.947 17:10:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.947 17:10:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.947 17:10:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.947 17:10:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.947 17:10:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.947 17:10:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.947 17:10:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:07:12.947 17:10:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:07:12.947 17:10:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.947 17:10:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.947 17:10:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:12.947 17:10:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.947 17:10:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.947 17:10:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.947 17:10:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.947 17:10:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.947 17:10:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.947 17:10:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.947 17:10:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.947 17:10:42 -- paths/export.sh@5 -- # export PATH 00:07:12.947 17:10:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.947 17:10:42 -- nvmf/common.sh@47 -- # : 0 00:07:12.947 17:10:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.948 17:10:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.948 17:10:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.948 17:10:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.948 17:10:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.948 17:10:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.948 17:10:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.948 17:10:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.948 17:10:42 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:12.948 17:10:42 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:12.948 17:10:42 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:12.948 17:10:42 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:12.948 17:10:42 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:12.948 17:10:42 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:12.948 17:10:42 -- target/referrals.sh@37 -- # nvmftestinit 00:07:12.948 17:10:42 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:12.948 17:10:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.948 17:10:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:12.948 17:10:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:12.948 17:10:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:12.948 17:10:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.948 17:10:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.948 17:10:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.948 17:10:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:07:12.948 17:10:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:07:12.948 17:10:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:07:12.948 17:10:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:07:12.948 17:10:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:07:12.948 17:10:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:07:12.948 17:10:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.948 17:10:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.948 17:10:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:12.948 17:10:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:12.948 17:10:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:12.948 17:10:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:12.948 17:10:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:12.948 17:10:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.948 17:10:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:12.948 17:10:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:12.948 17:10:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:12.948 17:10:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:12.948 17:10:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:12.948 17:10:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:12.948 Cannot find device "nvmf_tgt_br" 00:07:12.948 17:10:42 -- nvmf/common.sh@155 -- # true 00:07:12.948 17:10:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.948 Cannot find device "nvmf_tgt_br2" 00:07:12.948 17:10:42 -- nvmf/common.sh@156 -- # true 00:07:12.948 17:10:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:12.948 17:10:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:12.948 Cannot find device "nvmf_tgt_br" 00:07:12.948 17:10:42 -- nvmf/common.sh@158 -- # true 00:07:12.948 17:10:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:12.948 Cannot find device "nvmf_tgt_br2" 00:07:12.948 17:10:42 -- nvmf/common.sh@159 -- # true 00:07:12.948 17:10:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:13.206 17:10:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:13.206 17:10:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:13.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.206 17:10:42 -- nvmf/common.sh@162 -- # true 00:07:13.206 17:10:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:13.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:13.206 17:10:42 -- nvmf/common.sh@163 -- # true 00:07:13.206 17:10:42 -- nvmf/common.sh@166 
-- # ip netns add nvmf_tgt_ns_spdk 00:07:13.206 17:10:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:13.206 17:10:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:13.206 17:10:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:13.206 17:10:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:13.206 17:10:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:13.206 17:10:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:13.206 17:10:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:13.206 17:10:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:13.206 17:10:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:13.206 17:10:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:13.206 17:10:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:13.206 17:10:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:13.206 17:10:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:13.206 17:10:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:13.206 17:10:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:13.206 17:10:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:13.206 17:10:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:13.206 17:10:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:13.206 17:10:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:13.206 17:10:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:13.206 17:10:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:13.206 17:10:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:13.206 17:10:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:13.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:13.206 00:07:13.206 --- 10.0.0.2 ping statistics --- 00:07:13.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.206 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:13.206 17:10:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:13.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:13.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:07:13.206 00:07:13.206 --- 10.0.0.3 ping statistics --- 00:07:13.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.206 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:13.206 17:10:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:13.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:07:13.206 00:07:13.206 --- 10.0.0.1 ping statistics --- 00:07:13.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.206 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:07:13.206 17:10:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.206 17:10:43 -- nvmf/common.sh@422 -- # return 0 00:07:13.206 17:10:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:13.206 17:10:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.206 17:10:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:13.206 17:10:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:13.206 17:10:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.206 17:10:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:13.206 17:10:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:13.465 17:10:43 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:13.465 17:10:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:13.465 17:10:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.465 17:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.465 17:10:43 -- nvmf/common.sh@470 -- # nvmfpid=66415 00:07:13.465 17:10:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.465 17:10:43 -- nvmf/common.sh@471 -- # waitforlisten 66415 00:07:13.465 17:10:43 -- common/autotest_common.sh@817 -- # '[' -z 66415 ']' 00:07:13.465 17:10:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.465 17:10:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.465 17:10:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.465 17:10:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.465 17:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.465 [2024-04-25 17:10:43.268614] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:07:13.465 [2024-04-25 17:10:43.268979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.465 [2024-04-25 17:10:43.411942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.723 [2024-04-25 17:10:43.491307] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.723 [2024-04-25 17:10:43.491580] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.723 [2024-04-25 17:10:43.491812] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.723 [2024-04-25 17:10:43.491974] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.723 [2024-04-25 17:10:43.492023] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:13.723 [2024-04-25 17:10:43.492282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.723 [2024-04-25 17:10:43.492449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.723 [2024-04-25 17:10:43.493204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.723 [2024-04-25 17:10:43.493214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.657 17:10:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.657 17:10:44 -- common/autotest_common.sh@850 -- # return 0 00:07:14.657 17:10:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:14.657 17:10:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.657 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.657 17:10:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.657 17:10:44 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.657 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.657 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.657 [2024-04-25 17:10:44.325786] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.657 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.657 17:10:44 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:14.657 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.657 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.657 [2024-04-25 17:10:44.358171] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:14.657 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.658 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.658 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.658 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.658 17:10:44 -- target/referrals.sh@48 -- # jq length 00:07:14.658 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:14.658 17:10:44 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:14.658 17:10:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:14.658 17:10:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.658 17:10:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:14.658 17:10:44 -- target/referrals.sh@21 -- # sort 00:07:14.658 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:14.658 17:10:44 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:14.658 17:10:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:14.658 17:10:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:14.658 17:10:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:14.658 17:10:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:14.658 17:10:44 -- target/referrals.sh@26 -- # sort 00:07:14.658 17:10:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:14.658 17:10:44 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.658 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.658 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.658 17:10:44 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:14.658 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.658 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.917 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.917 17:10:44 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:14.917 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.917 17:10:44 -- target/referrals.sh@56 -- # jq length 00:07:14.917 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.917 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.917 17:10:44 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:14.917 17:10:44 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:14.917 17:10:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:14.917 17:10:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:14.917 17:10:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:14.917 17:10:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:14.917 17:10:44 -- target/referrals.sh@26 -- # sort 00:07:14.917 17:10:44 -- target/referrals.sh@26 -- # echo 00:07:14.917 17:10:44 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:14.917 17:10:44 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:14.917 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.917 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.917 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.917 17:10:44 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:14.917 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.917 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.917 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.917 17:10:44 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:14.917 17:10:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:14.917 17:10:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:14.917 17:10:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.917 17:10:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:14.917 17:10:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.917 17:10:44 -- target/referrals.sh@21 -- # sort 00:07:14.917 17:10:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.917 17:10:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:14.917 17:10:44 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:14.917 17:10:44 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:14.917 17:10:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:14.917 17:10:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:14.917 17:10:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:14.917 17:10:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:14.917 17:10:44 -- target/referrals.sh@26 -- # sort 00:07:15.176 17:10:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:15.176 17:10:44 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:15.176 17:10:44 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:15.176 17:10:44 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:15.176 17:10:44 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:15.176 17:10:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:15.176 17:10:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:15.176 17:10:44 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:15.176 17:10:44 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:15.176 17:10:44 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:15.176 17:10:44 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:15.176 17:10:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 
--hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:15.176 17:10:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:15.176 17:10:45 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:15.176 17:10:45 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:15.176 17:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:15.176 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.176 17:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:15.176 17:10:45 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:15.176 17:10:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:15.176 17:10:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:15.176 17:10:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:15.176 17:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:15.176 17:10:45 -- target/referrals.sh@21 -- # sort 00:07:15.176 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.176 17:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:15.176 17:10:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:15.176 17:10:45 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:15.176 17:10:45 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:15.176 17:10:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:15.176 17:10:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:15.176 17:10:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:15.176 17:10:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:15.176 17:10:45 -- target/referrals.sh@26 -- # sort 00:07:15.435 17:10:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:15.435 17:10:45 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:15.435 17:10:45 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:15.435 17:10:45 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:15.435 17:10:45 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:15.435 17:10:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:15.435 17:10:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:15.435 17:10:45 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:15.435 17:10:45 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:15.435 17:10:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:15.435 17:10:45 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:15.435 17:10:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:15.435 17:10:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:07:15.435 17:10:45 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:15.435 17:10:45 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:15.435 17:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:15.435 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.435 17:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:15.435 17:10:45 -- target/referrals.sh@82 -- # jq length 00:07:15.435 17:10:45 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:15.435 17:10:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:15.435 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.435 17:10:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:15.435 17:10:45 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:15.435 17:10:45 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:15.435 17:10:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:15.435 17:10:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:15.435 17:10:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:15.435 17:10:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:15.435 17:10:45 -- target/referrals.sh@26 -- # sort 00:07:15.694 17:10:45 -- target/referrals.sh@26 -- # echo 00:07:15.694 17:10:45 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:15.694 17:10:45 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:15.694 17:10:45 -- target/referrals.sh@86 -- # nvmftestfini 00:07:15.694 17:10:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:15.694 17:10:45 -- nvmf/common.sh@117 -- # sync 00:07:15.694 17:10:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:15.694 17:10:45 -- nvmf/common.sh@120 -- # set +e 00:07:15.694 17:10:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:15.694 17:10:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:15.694 rmmod nvme_tcp 00:07:15.694 rmmod nvme_fabrics 00:07:15.694 rmmod nvme_keyring 00:07:15.694 17:10:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:15.694 17:10:45 -- nvmf/common.sh@124 -- # set -e 00:07:15.694 17:10:45 -- nvmf/common.sh@125 -- # return 0 00:07:15.694 17:10:45 -- nvmf/common.sh@478 -- # '[' -n 66415 ']' 00:07:15.694 17:10:45 -- nvmf/common.sh@479 -- # killprocess 66415 00:07:15.694 17:10:45 -- common/autotest_common.sh@936 -- # '[' -z 66415 ']' 00:07:15.694 17:10:45 -- common/autotest_common.sh@940 -- # kill -0 66415 00:07:15.694 17:10:45 -- common/autotest_common.sh@941 -- # uname 00:07:15.694 17:10:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.694 17:10:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66415 00:07:15.694 killing process with pid 66415 00:07:15.694 17:10:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.694 17:10:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.694 17:10:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66415' 00:07:15.694 17:10:45 -- common/autotest_common.sh@955 -- # kill 66415 00:07:15.694 17:10:45 -- common/autotest_common.sh@960 -- # wait 66415 00:07:15.953 17:10:45 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:15.953 17:10:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:15.953 17:10:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:15.953 17:10:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.953 17:10:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.953 17:10:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.953 17:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.953 17:10:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.953 17:10:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:15.953 ************************************ 00:07:15.953 END TEST nvmf_referrals 00:07:15.953 ************************************ 00:07:15.953 00:07:15.953 real 0m3.051s 00:07:15.953 user 0m9.868s 00:07:15.953 sys 0m0.796s 00:07:15.953 17:10:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.953 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.953 17:10:45 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:15.953 17:10:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:15.953 17:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.953 17:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.953 ************************************ 00:07:15.953 START TEST nvmf_connect_disconnect 00:07:15.953 ************************************ 00:07:15.953 17:10:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:16.212 * Looking for test storage... 00:07:16.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:16.212 17:10:45 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:16.212 17:10:45 -- nvmf/common.sh@7 -- # uname -s 00:07:16.212 17:10:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.212 17:10:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.212 17:10:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.212 17:10:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.212 17:10:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.212 17:10:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.212 17:10:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.212 17:10:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.212 17:10:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.212 17:10:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.212 17:10:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:07:16.212 17:10:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:07:16.212 17:10:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.212 17:10:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.212 17:10:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:16.212 17:10:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.212 17:10:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.212 17:10:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.212 17:10:45 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.212 17:10:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.212 17:10:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.212 17:10:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.212 17:10:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.212 17:10:45 -- paths/export.sh@5 -- # export PATH 00:07:16.212 17:10:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.212 17:10:45 -- nvmf/common.sh@47 -- # : 0 00:07:16.212 17:10:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:16.212 17:10:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.212 17:10:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.212 17:10:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.212 17:10:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.212 17:10:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.212 17:10:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.213 17:10:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.213 17:10:45 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.213 17:10:45 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.213 17:10:45 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:16.213 17:10:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:16.213 17:10:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.213 17:10:45 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:07:16.213 17:10:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:16.213 17:10:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:16.213 17:10:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.213 17:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.213 17:10:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.213 17:10:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:07:16.213 17:10:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:07:16.213 17:10:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:07:16.213 17:10:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:07:16.213 17:10:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:07:16.213 17:10:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:07:16.213 17:10:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.213 17:10:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.213 17:10:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:16.213 17:10:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:16.213 17:10:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:16.213 17:10:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:16.213 17:10:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:16.213 17:10:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.213 17:10:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:16.213 17:10:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:16.213 17:10:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:16.213 17:10:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:16.213 17:10:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:16.213 17:10:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:16.213 Cannot find device "nvmf_tgt_br" 00:07:16.213 17:10:46 -- nvmf/common.sh@155 -- # true 00:07:16.213 17:10:46 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:16.213 Cannot find device "nvmf_tgt_br2" 00:07:16.213 17:10:46 -- nvmf/common.sh@156 -- # true 00:07:16.213 17:10:46 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:16.213 17:10:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:16.213 Cannot find device "nvmf_tgt_br" 00:07:16.213 17:10:46 -- nvmf/common.sh@158 -- # true 00:07:16.213 17:10:46 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:16.213 Cannot find device "nvmf_tgt_br2" 00:07:16.213 17:10:46 -- nvmf/common.sh@159 -- # true 00:07:16.213 17:10:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:16.213 17:10:46 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:16.213 17:10:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:16.213 17:10:46 -- nvmf/common.sh@162 -- # true 00:07:16.213 17:10:46 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:16.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:16.213 17:10:46 -- nvmf/common.sh@163 -- # true 00:07:16.213 17:10:46 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:16.213 17:10:46 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:07:16.213 17:10:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:16.213 17:10:46 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:16.213 17:10:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:16.213 17:10:46 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:16.473 17:10:46 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:16.473 17:10:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:16.473 17:10:46 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:16.473 17:10:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:16.473 17:10:46 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:16.473 17:10:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:16.473 17:10:46 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:16.473 17:10:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:16.473 17:10:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:16.473 17:10:46 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:16.473 17:10:46 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:16.473 17:10:46 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:16.473 17:10:46 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:16.473 17:10:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:16.473 17:10:46 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:16.473 17:10:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:16.473 17:10:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:16.473 17:10:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:16.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:07:16.473 00:07:16.473 --- 10.0.0.2 ping statistics --- 00:07:16.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.473 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:16.473 17:10:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:16.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:16.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:07:16.473 00:07:16.473 --- 10.0.0.3 ping statistics --- 00:07:16.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.473 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:16.473 17:10:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:16.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:16.473 00:07:16.473 --- 10.0.0.1 ping statistics --- 00:07:16.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.473 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:16.473 17:10:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.473 17:10:46 -- nvmf/common.sh@422 -- # return 0 00:07:16.473 17:10:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:16.473 17:10:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.473 17:10:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:16.473 17:10:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:16.473 17:10:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.473 17:10:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:16.473 17:10:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:16.473 17:10:46 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:16.473 17:10:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:16.473 17:10:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:16.473 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.473 17:10:46 -- nvmf/common.sh@470 -- # nvmfpid=66726 00:07:16.473 17:10:46 -- nvmf/common.sh@471 -- # waitforlisten 66726 00:07:16.473 17:10:46 -- common/autotest_common.sh@817 -- # '[' -z 66726 ']' 00:07:16.473 17:10:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.473 17:10:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.473 17:10:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.473 17:10:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.473 17:10:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.473 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.473 [2024-04-25 17:10:46.405510] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:07:16.473 [2024-04-25 17:10:46.405622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.732 [2024-04-25 17:10:46.542122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.732 [2024-04-25 17:10:46.594053] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.732 [2024-04-25 17:10:46.594119] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.732 [2024-04-25 17:10:46.594128] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.732 [2024-04-25 17:10:46.594136] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.732 [2024-04-25 17:10:46.594142] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:16.732 [2024-04-25 17:10:46.594290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.732 [2024-04-25 17:10:46.594736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.732 [2024-04-25 17:10:46.595299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.732 [2024-04-25 17:10:46.595310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.732 17:10:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:16.732 17:10:46 -- common/autotest_common.sh@850 -- # return 0 00:07:16.732 17:10:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:16.732 17:10:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:16.732 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.991 17:10:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:16.991 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.991 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.991 [2024-04-25 17:10:46.724316] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.991 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:16.991 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.991 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.991 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:16.991 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.991 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.991 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:16.991 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.991 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.991 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.991 17:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.991 17:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.991 [2024-04-25 17:10:46.792439] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.991 17:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:16.991 17:10:46 -- target/connect_disconnect.sh@34 -- # set +x 00:07:19.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:28.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.091 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:16.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.652 17:14:29 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
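[note] The ~100 "disconnected 1 controller(s)" lines above are the per-iteration output of the connect_disconnect test (num_iterations=100, NVME_CONNECT='nvme connect -i 8', subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, serial SPDKISFASTANDAWESOME, all visible earlier in this log). A minimal sketch of what one iteration does is given below; the waitforserial/waitforserial_disconnect helpers and exact connect flags are assumptions for illustration, not a copy of test/nvmf/target/connect_disconnect.sh.

    for ((i = 1; i <= 100; i++)); do
        # connect to the target over TCP with 8 I/O queues (matches NVME_CONNECT='nvme connect -i 8')
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # wait for the namespace backed by the Malloc0 bdev to appear (assumed helper from test/nvmf/common.sh)
        waitforserial SPDKISFASTANDAWESOME
        # tear the connection down; nvme-cli prints "NQN:... disconnected 1 controller(s)"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        # wait for the device node to disappear before the next iteration (assumed helper)
        waitforserial_disconnect SPDKISFASTANDAWESOME
    done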
00:10:59.652 17:14:29 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:59.652 17:14:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:59.652 17:14:29 -- nvmf/common.sh@117 -- # sync 00:10:59.652 17:14:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.652 17:14:29 -- nvmf/common.sh@120 -- # set +e 00:10:59.652 17:14:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.652 17:14:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.652 rmmod nvme_tcp 00:10:59.652 rmmod nvme_fabrics 00:10:59.652 rmmod nvme_keyring 00:10:59.652 17:14:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.652 17:14:29 -- nvmf/common.sh@124 -- # set -e 00:10:59.652 17:14:29 -- nvmf/common.sh@125 -- # return 0 00:10:59.652 17:14:29 -- nvmf/common.sh@478 -- # '[' -n 66726 ']' 00:10:59.652 17:14:29 -- nvmf/common.sh@479 -- # killprocess 66726 00:10:59.652 17:14:29 -- common/autotest_common.sh@936 -- # '[' -z 66726 ']' 00:10:59.652 17:14:29 -- common/autotest_common.sh@940 -- # kill -0 66726 00:10:59.652 17:14:29 -- common/autotest_common.sh@941 -- # uname 00:10:59.652 17:14:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:59.652 17:14:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66726 00:10:59.652 killing process with pid 66726 00:10:59.652 17:14:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:59.652 17:14:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:59.652 17:14:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66726' 00:10:59.652 17:14:29 -- common/autotest_common.sh@955 -- # kill 66726 00:10:59.652 17:14:29 -- common/autotest_common.sh@960 -- # wait 66726 00:10:59.910 17:14:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:59.910 17:14:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:59.910 17:14:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:59.910 17:14:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.910 17:14:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.910 17:14:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.910 17:14:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.910 17:14:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.910 17:14:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:59.910 00:10:59.910 real 3m43.837s 00:10:59.910 user 14m27.397s 00:10:59.910 sys 0m26.402s 00:10:59.910 17:14:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.910 ************************************ 00:10:59.910 END TEST nvmf_connect_disconnect 00:10:59.910 ************************************ 00:10:59.910 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:10:59.910 17:14:29 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:59.910 17:14:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:59.910 17:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.910 17:14:29 -- common/autotest_common.sh@10 -- # set +x 00:10:59.910 ************************************ 00:10:59.910 START TEST nvmf_multitarget 00:10:59.910 ************************************ 00:10:59.910 17:14:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:00.169 * Looking for test storage... 
00:11:00.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.169 17:14:29 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.169 17:14:29 -- nvmf/common.sh@7 -- # uname -s 00:11:00.169 17:14:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.169 17:14:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.169 17:14:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.169 17:14:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.169 17:14:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.169 17:14:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.169 17:14:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.169 17:14:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.169 17:14:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.169 17:14:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.169 17:14:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:00.169 17:14:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:00.169 17:14:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.169 17:14:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.169 17:14:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.169 17:14:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.169 17:14:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.169 17:14:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.169 17:14:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.169 17:14:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.169 17:14:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.169 17:14:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.169 17:14:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.169 17:14:29 -- paths/export.sh@5 -- # export PATH 00:11:00.169 17:14:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.169 17:14:29 -- nvmf/common.sh@47 -- # : 0 00:11:00.169 17:14:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.169 17:14:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.169 17:14:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.169 17:14:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.169 17:14:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.169 17:14:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.169 17:14:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.169 17:14:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.169 17:14:29 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:00.169 17:14:29 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:00.169 17:14:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:00.169 17:14:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.169 17:14:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:00.169 17:14:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:00.169 17:14:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:00.169 17:14:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.169 17:14:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.169 17:14:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.169 17:14:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:00.169 17:14:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:00.169 17:14:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:00.169 17:14:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:00.169 17:14:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:00.169 17:14:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:00.169 17:14:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.169 17:14:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.169 17:14:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:00.169 17:14:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:00.169 17:14:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.169 17:14:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.169 17:14:29 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.169 17:14:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.169 17:14:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.169 17:14:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.170 17:14:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.170 17:14:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.170 17:14:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:00.170 17:14:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:00.170 Cannot find device "nvmf_tgt_br" 00:11:00.170 17:14:29 -- nvmf/common.sh@155 -- # true 00:11:00.170 17:14:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.170 Cannot find device "nvmf_tgt_br2" 00:11:00.170 17:14:29 -- nvmf/common.sh@156 -- # true 00:11:00.170 17:14:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:00.170 17:14:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:00.170 Cannot find device "nvmf_tgt_br" 00:11:00.170 17:14:29 -- nvmf/common.sh@158 -- # true 00:11:00.170 17:14:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:00.170 Cannot find device "nvmf_tgt_br2" 00:11:00.170 17:14:30 -- nvmf/common.sh@159 -- # true 00:11:00.170 17:14:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:00.170 17:14:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:00.170 17:14:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.170 17:14:30 -- nvmf/common.sh@162 -- # true 00:11:00.170 17:14:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.170 17:14:30 -- nvmf/common.sh@163 -- # true 00:11:00.170 17:14:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.170 17:14:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.170 17:14:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.170 17:14:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.170 17:14:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.170 17:14:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.429 17:14:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.429 17:14:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:00.429 17:14:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:00.429 17:14:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:00.429 17:14:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:00.429 17:14:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:00.429 17:14:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:00.429 17:14:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:00.429 17:14:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:00.429 17:14:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:00.429 17:14:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:00.429 17:14:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:00.429 17:14:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.429 17:14:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.429 17:14:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.429 17:14:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.429 17:14:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.429 17:14:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:00.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:00.429 00:11:00.429 --- 10.0.0.2 ping statistics --- 00:11:00.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.429 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:00.429 17:14:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:00.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:00.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:11:00.429 00:11:00.429 --- 10.0.0.3 ping statistics --- 00:11:00.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.429 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:00.429 17:14:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:00.429 00:11:00.429 --- 10.0.0.1 ping statistics --- 00:11:00.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.429 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:00.429 17:14:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.429 17:14:30 -- nvmf/common.sh@422 -- # return 0 00:11:00.429 17:14:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:00.429 17:14:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.429 17:14:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:00.429 17:14:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:00.429 17:14:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.429 17:14:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:00.429 17:14:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:00.429 17:14:30 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:00.429 17:14:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:00.429 17:14:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:00.429 17:14:30 -- common/autotest_common.sh@10 -- # set +x 00:11:00.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:00.429 17:14:30 -- nvmf/common.sh@470 -- # nvmfpid=70479 00:11:00.429 17:14:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.429 17:14:30 -- nvmf/common.sh@471 -- # waitforlisten 70479 00:11:00.429 17:14:30 -- common/autotest_common.sh@817 -- # '[' -z 70479 ']' 00:11:00.429 17:14:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.429 17:14:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:00.429 17:14:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.429 17:14:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:00.429 17:14:30 -- common/autotest_common.sh@10 -- # set +x 00:11:00.429 [2024-04-25 17:14:30.383409] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:00.429 [2024-04-25 17:14:30.383492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.688 [2024-04-25 17:14:30.519333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.688 [2024-04-25 17:14:30.581208] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.688 [2024-04-25 17:14:30.581517] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.688 [2024-04-25 17:14:30.581693] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.688 [2024-04-25 17:14:30.581880] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.688 [2024-04-25 17:14:30.581919] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
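(The bring-up traced above isolates the target in its own network namespace: veth pairs for the initiator and target interfaces, addresses from 10.0.0.0/24, everything enslaved to the nvmf_br bridge, a round of pings to confirm reachability, then nvmf_tgt launched inside the namespace while the host side waits for its RPC socket. Condensed into a sketch -- same commands as the xtrace above, reduced to a single target interface, with the pid handling and listen wait simplified:

    # target-side network namespace plus one initiator/target veth pair each
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2     # initiator namespace reaching the target interface
    # start the SPDK target inside the namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The actual scripts also add a second target interface (nvmf_tgt_if2, 10.0.0.3) and a matching bridge port, which is why the log pings three addresses.)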
00:11:00.688 [2024-04-25 17:14:30.582175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.688 [2024-04-25 17:14:30.582318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.688 [2024-04-25 17:14:30.582400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.688 [2024-04-25 17:14:30.582400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.623 17:14:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:01.623 17:14:31 -- common/autotest_common.sh@850 -- # return 0 00:11:01.623 17:14:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:01.623 17:14:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:01.623 17:14:31 -- common/autotest_common.sh@10 -- # set +x 00:11:01.623 17:14:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.623 17:14:31 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:01.623 17:14:31 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:01.623 17:14:31 -- target/multitarget.sh@21 -- # jq length 00:11:01.623 17:14:31 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:01.623 17:14:31 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:01.623 "nvmf_tgt_1" 00:11:01.623 17:14:31 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:01.882 "nvmf_tgt_2" 00:11:01.882 17:14:31 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:01.882 17:14:31 -- target/multitarget.sh@28 -- # jq length 00:11:02.141 17:14:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:02.141 17:14:31 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:02.141 true 00:11:02.141 17:14:31 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:02.141 true 00:11:02.400 17:14:32 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:02.400 17:14:32 -- target/multitarget.sh@35 -- # jq length 00:11:02.400 17:14:32 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:02.400 17:14:32 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:02.400 17:14:32 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:02.400 17:14:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:02.400 17:14:32 -- nvmf/common.sh@117 -- # sync 00:11:02.400 17:14:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:02.400 17:14:32 -- nvmf/common.sh@120 -- # set +e 00:11:02.400 17:14:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:02.400 17:14:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:02.400 rmmod nvme_tcp 00:11:02.400 rmmod nvme_fabrics 00:11:02.400 rmmod nvme_keyring 00:11:02.400 17:14:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:02.400 17:14:32 -- nvmf/common.sh@124 -- # set -e 00:11:02.400 17:14:32 -- nvmf/common.sh@125 -- # return 0 00:11:02.400 17:14:32 -- nvmf/common.sh@478 -- # '[' -n 70479 ']' 00:11:02.400 17:14:32 -- nvmf/common.sh@479 -- # killprocess 70479 00:11:02.400 17:14:32 
-- common/autotest_common.sh@936 -- # '[' -z 70479 ']' 00:11:02.400 17:14:32 -- common/autotest_common.sh@940 -- # kill -0 70479 00:11:02.400 17:14:32 -- common/autotest_common.sh@941 -- # uname 00:11:02.400 17:14:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.400 17:14:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70479 00:11:02.400 killing process with pid 70479 00:11:02.400 17:14:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:02.400 17:14:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:02.400 17:14:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70479' 00:11:02.400 17:14:32 -- common/autotest_common.sh@955 -- # kill 70479 00:11:02.400 17:14:32 -- common/autotest_common.sh@960 -- # wait 70479 00:11:02.659 17:14:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:02.659 17:14:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:02.659 17:14:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:02.659 17:14:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.659 17:14:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:02.659 17:14:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.659 17:14:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.659 17:14:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.659 17:14:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:02.659 ************************************ 00:11:02.659 END TEST nvmf_multitarget 00:11:02.659 ************************************ 00:11:02.659 00:11:02.659 real 0m2.779s 00:11:02.659 user 0m8.974s 00:11:02.659 sys 0m0.623s 00:11:02.659 17:14:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.659 17:14:32 -- common/autotest_common.sh@10 -- # set +x 00:11:02.918 17:14:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:02.918 17:14:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:02.918 17:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.918 17:14:32 -- common/autotest_common.sh@10 -- # set +x 00:11:02.918 ************************************ 00:11:02.918 START TEST nvmf_rpc 00:11:02.918 ************************************ 00:11:02.919 17:14:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:02.919 * Looking for test storage... 
00:11:02.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.919 17:14:32 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.919 17:14:32 -- nvmf/common.sh@7 -- # uname -s 00:11:02.919 17:14:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.919 17:14:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.919 17:14:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.919 17:14:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.919 17:14:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.919 17:14:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.919 17:14:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.919 17:14:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.919 17:14:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.919 17:14:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.919 17:14:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:02.919 17:14:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:02.919 17:14:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.919 17:14:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.919 17:14:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.919 17:14:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.919 17:14:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.919 17:14:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.919 17:14:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.919 17:14:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.919 17:14:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.919 17:14:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.919 17:14:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.919 17:14:32 -- paths/export.sh@5 -- # export PATH 00:11:02.919 17:14:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.919 17:14:32 -- nvmf/common.sh@47 -- # : 0 00:11:02.919 17:14:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.919 17:14:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.919 17:14:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.919 17:14:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.919 17:14:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.919 17:14:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.919 17:14:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.919 17:14:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.919 17:14:32 -- target/rpc.sh@11 -- # loops=5 00:11:02.919 17:14:32 -- target/rpc.sh@23 -- # nvmftestinit 00:11:02.919 17:14:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:02.919 17:14:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.919 17:14:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:02.919 17:14:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:02.919 17:14:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:02.919 17:14:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.919 17:14:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.919 17:14:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.919 17:14:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:02.919 17:14:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:02.919 17:14:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:02.919 17:14:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:02.919 17:14:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:02.919 17:14:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:02.919 17:14:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.919 17:14:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.919 17:14:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:02.919 17:14:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:02.919 17:14:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.919 17:14:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.919 17:14:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.919 17:14:32 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.919 17:14:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.919 17:14:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.919 17:14:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.919 17:14:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.919 17:14:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:02.919 17:14:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:02.919 Cannot find device "nvmf_tgt_br" 00:11:02.919 17:14:32 -- nvmf/common.sh@155 -- # true 00:11:02.919 17:14:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.919 Cannot find device "nvmf_tgt_br2" 00:11:02.919 17:14:32 -- nvmf/common.sh@156 -- # true 00:11:02.919 17:14:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:02.919 17:14:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:02.919 Cannot find device "nvmf_tgt_br" 00:11:02.919 17:14:32 -- nvmf/common.sh@158 -- # true 00:11:02.919 17:14:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:03.178 Cannot find device "nvmf_tgt_br2" 00:11:03.178 17:14:32 -- nvmf/common.sh@159 -- # true 00:11:03.178 17:14:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:03.178 17:14:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:03.178 17:14:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.178 17:14:32 -- nvmf/common.sh@162 -- # true 00:11:03.178 17:14:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.178 17:14:32 -- nvmf/common.sh@163 -- # true 00:11:03.178 17:14:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.178 17:14:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.178 17:14:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.178 17:14:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.178 17:14:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.178 17:14:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.178 17:14:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.178 17:14:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:03.178 17:14:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:03.178 17:14:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:03.178 17:14:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:03.178 17:14:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:03.178 17:14:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:03.178 17:14:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.178 17:14:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.178 17:14:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.178 17:14:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:11:03.178 17:14:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:03.178 17:14:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.178 17:14:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.437 17:14:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.437 17:14:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.437 17:14:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.437 17:14:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:03.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:03.437 00:11:03.437 --- 10.0.0.2 ping statistics --- 00:11:03.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.437 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:03.437 17:14:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:03.437 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.437 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:11:03.437 00:11:03.437 --- 10.0.0.3 ping statistics --- 00:11:03.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.437 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:03.437 17:14:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:03.437 00:11:03.437 --- 10.0.0.1 ping statistics --- 00:11:03.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.437 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:03.437 17:14:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.437 17:14:33 -- nvmf/common.sh@422 -- # return 0 00:11:03.437 17:14:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:03.437 17:14:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.437 17:14:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:03.437 17:14:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:03.437 17:14:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.437 17:14:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:03.437 17:14:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:03.437 17:14:33 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:03.437 17:14:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:03.437 17:14:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:03.437 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:11:03.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.437 17:14:33 -- nvmf/common.sh@470 -- # nvmfpid=70715 00:11:03.437 17:14:33 -- nvmf/common.sh@471 -- # waitforlisten 70715 00:11:03.437 17:14:33 -- common/autotest_common.sh@817 -- # '[' -z 70715 ']' 00:11:03.437 17:14:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.437 17:14:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:03.437 17:14:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:03.437 17:14:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.437 17:14:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:03.437 17:14:33 -- common/autotest_common.sh@10 -- # set +x 00:11:03.437 [2024-04-25 17:14:33.287966] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:03.437 [2024-04-25 17:14:33.288070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.696 [2024-04-25 17:14:33.427313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.696 [2024-04-25 17:14:33.485361] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.696 [2024-04-25 17:14:33.485563] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.696 [2024-04-25 17:14:33.485727] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.696 [2024-04-25 17:14:33.485876] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.696 [2024-04-25 17:14:33.485922] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.696 [2024-04-25 17:14:33.486204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.696 [2024-04-25 17:14:33.486324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.696 [2024-04-25 17:14:33.486381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.696 [2024-04-25 17:14:33.486383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.263 17:14:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.263 17:14:34 -- common/autotest_common.sh@850 -- # return 0 00:11:04.263 17:14:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:04.263 17:14:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.263 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.523 17:14:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.523 17:14:34 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:04.523 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.523 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.523 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.523 17:14:34 -- target/rpc.sh@26 -- # stats='{ 00:11:04.523 "poll_groups": [ 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 "io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_0", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [] 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 "io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_1", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [] 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 
"io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_2", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [] 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 "io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_3", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [] 00:11:04.523 } 00:11:04.523 ], 00:11:04.523 "tick_rate": 2200000000 00:11:04.523 }' 00:11:04.523 17:14:34 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:04.523 17:14:34 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:04.523 17:14:34 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:04.523 17:14:34 -- target/rpc.sh@15 -- # wc -l 00:11:04.523 17:14:34 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:04.523 17:14:34 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:04.523 17:14:34 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:04.523 17:14:34 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.523 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.523 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.523 [2024-04-25 17:14:34.389044] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.523 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.523 17:14:34 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:04.523 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.523 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.523 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.523 17:14:34 -- target/rpc.sh@33 -- # stats='{ 00:11:04.523 "poll_groups": [ 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 "io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_0", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [ 00:11:04.523 { 00:11:04.523 "trtype": "TCP" 00:11:04.523 } 00:11:04.523 ] 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 "io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_1", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [ 00:11:04.523 { 00:11:04.523 "trtype": "TCP" 00:11:04.523 } 00:11:04.523 ] 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 "io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_2", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [ 00:11:04.523 { 00:11:04.523 "trtype": "TCP" 00:11:04.523 } 00:11:04.523 ] 00:11:04.523 }, 00:11:04.523 { 00:11:04.523 "admin_qpairs": 0, 00:11:04.523 "completed_nvme_io": 0, 00:11:04.523 "current_admin_qpairs": 0, 00:11:04.523 "current_io_qpairs": 0, 00:11:04.523 "io_qpairs": 0, 00:11:04.523 "name": "nvmf_tgt_poll_group_3", 00:11:04.523 "pending_bdev_io": 0, 00:11:04.523 "transports": [ 00:11:04.523 { 00:11:04.523 "trtype": "TCP" 00:11:04.523 } 00:11:04.523 ] 00:11:04.523 } 00:11:04.523 ], 00:11:04.523 "tick_rate": 2200000000 00:11:04.523 }' 00:11:04.523 17:14:34 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:04.523 17:14:34 -- 
target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:04.523 17:14:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:04.523 17:14:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:04.523 17:14:34 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:04.523 17:14:34 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:04.523 17:14:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:04.523 17:14:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:04.523 17:14:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:04.783 17:14:34 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:04.783 17:14:34 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:04.783 17:14:34 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:04.783 17:14:34 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:04.783 17:14:34 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:04.783 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.783 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.783 Malloc1 00:11:04.783 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.783 17:14:34 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.783 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.783 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.783 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.783 17:14:34 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.783 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.783 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.783 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.783 17:14:34 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:04.783 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.783 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.783 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.783 17:14:34 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.783 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.783 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.783 [2024-04-25 17:14:34.586437] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.783 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.783 17:14:34 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 -a 10.0.0.2 -s 4420 00:11:04.783 17:14:34 -- common/autotest_common.sh@638 -- # local es=0 00:11:04.783 17:14:34 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 -a 10.0.0.2 -s 4420 00:11:04.783 17:14:34 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:04.783 17:14:34 -- common/autotest_common.sh@630 -- # case "$(type -t 
"$arg")" in 00:11:04.783 17:14:34 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:04.783 17:14:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:04.783 17:14:34 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:04.783 17:14:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:04.783 17:14:34 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:04.783 17:14:34 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:04.783 17:14:34 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 -a 10.0.0.2 -s 4420 00:11:04.783 [2024-04-25 17:14:34.610657] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7' 00:11:04.783 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:04.783 could not add new controller: failed to write to nvme-fabrics device 00:11:04.783 17:14:34 -- common/autotest_common.sh@641 -- # es=1 00:11:04.783 17:14:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:04.783 17:14:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:04.783 17:14:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:04.783 17:14:34 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:04.783 17:14:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.783 17:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.783 17:14:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.783 17:14:34 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.042 17:14:34 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.043 17:14:34 -- common/autotest_common.sh@1184 -- # local i=0 00:11:05.043 17:14:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.043 17:14:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:05.043 17:14:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:06.943 17:14:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:06.943 17:14:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:06.943 17:14:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.943 17:14:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:06.943 17:14:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.943 17:14:36 -- common/autotest_common.sh@1194 -- # return 0 00:11:06.943 17:14:36 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.943 17:14:36 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.943 17:14:36 -- common/autotest_common.sh@1205 -- # local i=0 00:11:06.943 17:14:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:06.944 17:14:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.944 17:14:36 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:06.944 17:14:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.944 17:14:36 -- common/autotest_common.sh@1217 -- # return 0 00:11:06.944 17:14:36 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:06.944 17:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.944 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:11:06.944 17:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.944 17:14:36 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.944 17:14:36 -- common/autotest_common.sh@638 -- # local es=0 00:11:06.944 17:14:36 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.944 17:14:36 -- common/autotest_common.sh@626 -- # local arg=nvme 00:11:06.944 17:14:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.944 17:14:36 -- common/autotest_common.sh@630 -- # type -t nvme 00:11:06.944 17:14:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.944 17:14:36 -- common/autotest_common.sh@632 -- # type -P nvme 00:11:06.944 17:14:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:06.944 17:14:36 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:11:06.944 17:14:36 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:11:06.944 17:14:36 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.944 [2024-04-25 17:14:36.901789] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7' 00:11:06.944 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:06.944 could not add new controller: failed to write to nvme-fabrics device 00:11:06.944 17:14:36 -- common/autotest_common.sh@641 -- # es=1 00:11:06.944 17:14:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:06.944 17:14:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:06.944 17:14:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:06.944 17:14:36 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:06.944 17:14:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:06.944 17:14:36 -- common/autotest_common.sh@10 -- # set +x 00:11:06.944 17:14:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.944 17:14:36 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.203 17:14:37 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.203 17:14:37 -- common/autotest_common.sh@1184 -- # local i=0 00:11:07.203 17:14:37 -- common/autotest_common.sh@1185 -- # 
local nvme_device_counter=1 nvme_devices=0 00:11:07.203 17:14:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:07.203 17:14:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:09.733 17:14:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:09.733 17:14:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:09.733 17:14:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.733 17:14:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:09.733 17:14:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.733 17:14:39 -- common/autotest_common.sh@1194 -- # return 0 00:11:09.733 17:14:39 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:09.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.733 17:14:39 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:09.733 17:14:39 -- common/autotest_common.sh@1205 -- # local i=0 00:11:09.733 17:14:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:09.733 17:14:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.733 17:14:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:09.733 17:14:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:09.733 17:14:39 -- common/autotest_common.sh@1217 -- # return 0 00:11:09.733 17:14:39 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.733 17:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.733 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:11:09.733 17:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.733 17:14:39 -- target/rpc.sh@81 -- # seq 1 5 00:11:09.733 17:14:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:09.733 17:14:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:09.733 17:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.733 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:11:09.733 17:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.733 17:14:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.733 17:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.733 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:11:09.733 [2024-04-25 17:14:39.205127] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.733 17:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.733 17:14:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:09.733 17:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.733 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:11:09.733 17:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.733 17:14:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:09.733 17:14:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.733 17:14:39 -- common/autotest_common.sh@10 -- # set +x 00:11:09.733 17:14:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.733 17:14:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 
-t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.733 17:14:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.733 17:14:39 -- common/autotest_common.sh@1184 -- # local i=0 00:11:09.733 17:14:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.733 17:14:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:09.733 17:14:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:11.637 17:14:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:11.637 17:14:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:11.637 17:14:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.637 17:14:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:11.637 17:14:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.637 17:14:41 -- common/autotest_common.sh@1194 -- # return 0 00:11:11.637 17:14:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.637 17:14:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.637 17:14:41 -- common/autotest_common.sh@1205 -- # local i=0 00:11:11.637 17:14:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:11.637 17:14:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.637 17:14:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:11.637 17:14:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.637 17:14:41 -- common/autotest_common.sh@1217 -- # return 0 00:11:11.637 17:14:41 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.637 17:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.637 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 17:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.637 17:14:41 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.637 17:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.637 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 17:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.637 17:14:41 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:11.637 17:14:41 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:11.637 17:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.637 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 17:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.637 17:14:41 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.637 17:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.637 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 [2024-04-25 17:14:41.520172] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.637 17:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.637 17:14:41 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:11.637 17:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.637 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 17:14:41 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.637 17:14:41 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:11.637 17:14:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.637 17:14:41 -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 17:14:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.637 17:14:41 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.896 17:14:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.896 17:14:41 -- common/autotest_common.sh@1184 -- # local i=0 00:11:11.896 17:14:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.896 17:14:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:11.896 17:14:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:13.798 17:14:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:13.798 17:14:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:13.798 17:14:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.798 17:14:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:13.798 17:14:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.798 17:14:43 -- common/autotest_common.sh@1194 -- # return 0 00:11:13.798 17:14:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.798 17:14:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.798 17:14:43 -- common/autotest_common.sh@1205 -- # local i=0 00:11:13.798 17:14:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:13.798 17:14:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.057 17:14:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:14.057 17:14:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.057 17:14:43 -- common/autotest_common.sh@1217 -- # return 0 00:11:14.057 17:14:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.057 17:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.057 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:11:14.057 17:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.057 17:14:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.057 17:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.057 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:11:14.057 17:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.057 17:14:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:14.057 17:14:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.057 17:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.057 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:11:14.057 17:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.057 17:14:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.057 17:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.057 
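The waitforserial / waitforserial_disconnect calls traced above poll lsblk until a block device carrying the subsystem's serial number appears (after nvme connect) or disappears (after nvme disconnect). A minimal sketch of both helpers, reconstructed from the traced commands; the real implementations in common/autotest_common.sh add more bookkeeping:

# Poll until $expected devices report the given serial, up to ~15 tries (sketch).
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 count=0
    while (( i++ <= 15 )); do
        sleep 2
        count=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( count == expected )) && return 0
    done
    return 1
}

# Poll until no block device reports the serial any more (sketch).
waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1
        sleep 1
    done
    return 0
}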
17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:11:14.057 [2024-04-25 17:14:43.823785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.057 17:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.057 17:14:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:14.057 17:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.057 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:11:14.057 17:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.057 17:14:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.057 17:14:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.057 17:14:43 -- common/autotest_common.sh@10 -- # set +x 00:11:14.057 17:14:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.057 17:14:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.057 17:14:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.057 17:14:44 -- common/autotest_common.sh@1184 -- # local i=0 00:11:14.057 17:14:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.057 17:14:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:14.057 17:14:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:16.589 17:14:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:16.589 17:14:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:16.589 17:14:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.589 17:14:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:16.589 17:14:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.589 17:14:46 -- common/autotest_common.sh@1194 -- # return 0 00:11:16.590 17:14:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.590 17:14:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.590 17:14:46 -- common/autotest_common.sh@1205 -- # local i=0 00:11:16.590 17:14:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:16.590 17:14:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.590 17:14:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:16.590 17:14:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.590 17:14:46 -- common/autotest_common.sh@1217 -- # return 0 00:11:16.590 17:14:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.590 17:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.590 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.590 17:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.590 17:14:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.590 17:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.590 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.590 17:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.590 17:14:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:16.590 17:14:46 -- 
target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.590 17:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.590 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.590 17:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.590 17:14:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.590 17:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.590 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.590 [2024-04-25 17:14:46.118987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.590 17:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.590 17:14:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:16.590 17:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.590 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.590 17:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.590 17:14:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.590 17:14:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.590 17:14:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.590 17:14:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.590 17:14:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.590 17:14:46 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.590 17:14:46 -- common/autotest_common.sh@1184 -- # local i=0 00:11:16.590 17:14:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.590 17:14:46 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:16.590 17:14:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:18.491 17:14:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:18.491 17:14:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:18.491 17:14:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.491 17:14:48 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:18.491 17:14:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.491 17:14:48 -- common/autotest_common.sh@1194 -- # return 0 00:11:18.491 17:14:48 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.491 17:14:48 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.491 17:14:48 -- common/autotest_common.sh@1205 -- # local i=0 00:11:18.491 17:14:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:18.491 17:14:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.491 17:14:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:18.491 17:14:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.491 17:14:48 -- common/autotest_common.sh@1217 -- # return 0 00:11:18.491 17:14:48 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.491 17:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.491 
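Stripped of the rpc_cmd/xtrace wrappers, each pass of the loop above exercises the same subsystem lifecycle. The equivalent direct calls, using the paths and host identity from this run and the wait helpers sketched earlier (a condensed sketch, not the verbatim rpc.sh body):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Target side: subsystem with a fixed serial, TCP listener, one namespace, open access.
$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host "$nqn"

# Host side: connect, wait for the namespace to surface, then tear it all down again.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 \
             --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 \
             -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
waitforserial SPDKISFASTANDAWESOME
nvme disconnect -n "$nqn"
waitforserial_disconnect SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"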
17:14:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.491 17:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.491 17:14:48 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.491 17:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.491 17:14:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.491 17:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.491 17:14:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:18.491 17:14:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.491 17:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.491 17:14:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.491 17:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.491 17:14:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.491 17:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.492 17:14:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.492 [2024-04-25 17:14:48.418053] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.492 17:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.492 17:14:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:18.492 17:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.492 17:14:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.492 17:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.492 17:14:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.492 17:14:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.492 17:14:48 -- common/autotest_common.sh@10 -- # set +x 00:11:18.492 17:14:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.492 17:14:48 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.750 17:14:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.750 17:14:48 -- common/autotest_common.sh@1184 -- # local i=0 00:11:18.750 17:14:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.750 17:14:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:18.750 17:14:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:20.652 17:14:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:20.652 17:14:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:20.652 17:14:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:20.912 17:14:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.912 17:14:50 -- common/autotest_common.sh@1194 -- # return 0 00:11:20.912 17:14:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.912 17:14:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@1205 -- # local i=0 00:11:20.912 17:14:50 -- common/autotest_common.sh@1206 -- # lsblk -o 
NAME,SERIAL 00:11:20.912 17:14:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:20.912 17:14:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@1217 -- # return 0 00:11:20.912 17:14:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@99 -- # seq 1 5 00:11:20.912 17:14:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.912 17:14:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 [2024-04-25 17:14:50.721012] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.912 17:14:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 [2024-04-25 17:14:50.781046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.912 17:14:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 [2024-04-25 17:14:50.829105] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.912 17:14:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:20.912 [2024-04-25 17:14:50.877138] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.912 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.912 17:14:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.912 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.912 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:21.172 17:14:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 [2024-04-25 17:14:50.925195] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:21.172 17:14:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.172 17:14:50 -- common/autotest_common.sh@10 -- # set +x 00:11:21.172 17:14:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.172 17:14:50 -- target/rpc.sh@110 -- # stats='{ 00:11:21.172 "poll_groups": [ 00:11:21.172 { 00:11:21.172 "admin_qpairs": 2, 00:11:21.172 "completed_nvme_io": 166, 00:11:21.172 "current_admin_qpairs": 0, 00:11:21.172 "current_io_qpairs": 0, 00:11:21.172 "io_qpairs": 16, 00:11:21.172 "name": "nvmf_tgt_poll_group_0", 00:11:21.172 "pending_bdev_io": 0, 00:11:21.172 "transports": [ 00:11:21.172 { 00:11:21.172 "trtype": "TCP" 00:11:21.172 } 00:11:21.172 ] 00:11:21.172 }, 00:11:21.172 { 00:11:21.172 "admin_qpairs": 3, 00:11:21.172 "completed_nvme_io": 66, 00:11:21.172 "current_admin_qpairs": 0, 00:11:21.172 "current_io_qpairs": 0, 00:11:21.172 "io_qpairs": 17, 00:11:21.172 "name": "nvmf_tgt_poll_group_1", 00:11:21.172 "pending_bdev_io": 0, 00:11:21.172 "transports": [ 00:11:21.172 { 00:11:21.172 "trtype": "TCP" 00:11:21.172 } 00:11:21.172 ] 00:11:21.172 }, 00:11:21.172 { 00:11:21.172 "admin_qpairs": 1, 00:11:21.172 "completed_nvme_io": 69, 00:11:21.172 "current_admin_qpairs": 0, 00:11:21.172 "current_io_qpairs": 0, 00:11:21.172 "io_qpairs": 19, 00:11:21.172 "name": "nvmf_tgt_poll_group_2", 00:11:21.172 "pending_bdev_io": 0, 00:11:21.172 "transports": [ 00:11:21.172 { 00:11:21.172 "trtype": "TCP" 00:11:21.172 } 00:11:21.172 ] 00:11:21.172 }, 00:11:21.172 { 00:11:21.172 "admin_qpairs": 1, 00:11:21.172 "completed_nvme_io": 119, 00:11:21.172 "current_admin_qpairs": 0, 00:11:21.172 "current_io_qpairs": 0, 00:11:21.172 "io_qpairs": 18, 00:11:21.172 "name": "nvmf_tgt_poll_group_3", 00:11:21.172 "pending_bdev_io": 0, 00:11:21.172 "transports": [ 00:11:21.172 { 00:11:21.172 "trtype": "TCP" 00:11:21.172 } 00:11:21.172 ] 00:11:21.172 } 00:11:21.172 ], 00:11:21.172 "tick_rate": 2200000000 00:11:21.172 }' 00:11:21.172 17:14:50 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:21.172 17:14:50 -- target/rpc.sh@19 -- # 
local 'filter=.poll_groups[].admin_qpairs' 00:11:21.172 17:14:50 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:21.172 17:14:50 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:21.172 17:14:51 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:21.172 17:14:51 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:21.172 17:14:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:21.172 17:14:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:21.172 17:14:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:21.172 17:14:51 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:21.172 17:14:51 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:21.172 17:14:51 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:21.172 17:14:51 -- target/rpc.sh@123 -- # nvmftestfini 00:11:21.172 17:14:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:21.172 17:14:51 -- nvmf/common.sh@117 -- # sync 00:11:21.172 17:14:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:21.172 17:14:51 -- nvmf/common.sh@120 -- # set +e 00:11:21.172 17:14:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:21.172 17:14:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:21.172 rmmod nvme_tcp 00:11:21.172 rmmod nvme_fabrics 00:11:21.172 rmmod nvme_keyring 00:11:21.431 17:14:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:21.431 17:14:51 -- nvmf/common.sh@124 -- # set -e 00:11:21.431 17:14:51 -- nvmf/common.sh@125 -- # return 0 00:11:21.431 17:14:51 -- nvmf/common.sh@478 -- # '[' -n 70715 ']' 00:11:21.431 17:14:51 -- nvmf/common.sh@479 -- # killprocess 70715 00:11:21.431 17:14:51 -- common/autotest_common.sh@936 -- # '[' -z 70715 ']' 00:11:21.431 17:14:51 -- common/autotest_common.sh@940 -- # kill -0 70715 00:11:21.431 17:14:51 -- common/autotest_common.sh@941 -- # uname 00:11:21.431 17:14:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:21.431 17:14:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70715 00:11:21.431 killing process with pid 70715 00:11:21.431 17:14:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:21.431 17:14:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:21.431 17:14:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70715' 00:11:21.431 17:14:51 -- common/autotest_common.sh@955 -- # kill 70715 00:11:21.431 17:14:51 -- common/autotest_common.sh@960 -- # wait 70715 00:11:21.690 17:14:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:21.690 17:14:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:21.690 17:14:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:21.690 17:14:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.690 17:14:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:21.690 17:14:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.690 17:14:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.690 17:14:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.690 17:14:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:21.690 00:11:21.690 real 0m18.732s 00:11:21.690 user 1m10.220s 00:11:21.690 sys 0m2.751s 00:11:21.690 17:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.690 ************************************ 00:11:21.690 END TEST nvmf_rpc 00:11:21.690 ************************************ 00:11:21.690 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:11:21.690 
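The final check of nvmf_rpc sums qpair counters across all four poll groups reported by nvmf_get_stats; the jsum helper traced above is a jq-plus-awk reduction over that JSON. A sketch of the same check, calling scripts/rpc.py directly instead of the test's rpc_cmd wrapper (the counts are the ones from this run: 2+3+1+1 admin qpairs and 16+17+19+18 I/O qpairs):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
stats=$($rpc nvmf_get_stats)

# Sum one numeric field across every poll group in the stats JSON.
jsum() {
    jq "$1" <<< "$stats" | awk '{s += $1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 70 in this run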
17:14:51 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:21.690 17:14:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:21.690 17:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.690 17:14:51 -- common/autotest_common.sh@10 -- # set +x 00:11:21.690 ************************************ 00:11:21.690 START TEST nvmf_invalid 00:11:21.690 ************************************ 00:11:21.690 17:14:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:21.690 * Looking for test storage... 00:11:21.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.690 17:14:51 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.690 17:14:51 -- nvmf/common.sh@7 -- # uname -s 00:11:21.690 17:14:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.690 17:14:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.690 17:14:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.690 17:14:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.690 17:14:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.690 17:14:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.690 17:14:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.690 17:14:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.690 17:14:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.690 17:14:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.690 17:14:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:21.690 17:14:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:21.690 17:14:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.690 17:14:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.690 17:14:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.690 17:14:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.690 17:14:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.690 17:14:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.690 17:14:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.690 17:14:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.690 17:14:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.690 17:14:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.690 17:14:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.690 17:14:51 -- paths/export.sh@5 -- # export PATH 00:11:21.691 17:14:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.691 17:14:51 -- nvmf/common.sh@47 -- # : 0 00:11:21.691 17:14:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.691 17:14:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.691 17:14:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.691 17:14:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.691 17:14:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.691 17:14:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.691 17:14:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.691 17:14:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.691 17:14:51 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:21.691 17:14:51 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:21.691 17:14:51 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:21.691 17:14:51 -- target/invalid.sh@14 -- # target=foobar 00:11:21.691 17:14:51 -- target/invalid.sh@16 -- # RANDOM=0 00:11:21.691 17:14:51 -- target/invalid.sh@34 -- # nvmftestinit 00:11:21.691 17:14:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:21.691 17:14:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.691 17:14:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:21.691 17:14:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:21.691 17:14:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:21.691 17:14:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.691 17:14:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.691 17:14:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.949 17:14:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:11:21.949 17:14:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:21.949 17:14:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:21.949 17:14:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:21.949 17:14:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:21.949 17:14:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:21.949 17:14:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.949 17:14:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.949 17:14:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:21.949 17:14:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:21.949 17:14:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.949 17:14:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.949 17:14:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.949 17:14:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.949 17:14:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.949 17:14:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.949 17:14:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.949 17:14:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.949 17:14:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:21.949 17:14:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:21.949 Cannot find device "nvmf_tgt_br" 00:11:21.949 17:14:51 -- nvmf/common.sh@155 -- # true 00:11:21.949 17:14:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.949 Cannot find device "nvmf_tgt_br2" 00:11:21.949 17:14:51 -- nvmf/common.sh@156 -- # true 00:11:21.949 17:14:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:21.949 17:14:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:21.949 Cannot find device "nvmf_tgt_br" 00:11:21.949 17:14:51 -- nvmf/common.sh@158 -- # true 00:11:21.949 17:14:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:21.949 Cannot find device "nvmf_tgt_br2" 00:11:21.949 17:14:51 -- nvmf/common.sh@159 -- # true 00:11:21.949 17:14:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:21.949 17:14:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:21.949 17:14:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.949 17:14:51 -- nvmf/common.sh@162 -- # true 00:11:21.949 17:14:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.949 17:14:51 -- nvmf/common.sh@163 -- # true 00:11:21.949 17:14:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.949 17:14:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.949 17:14:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.949 17:14:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.949 17:14:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.949 17:14:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.949 17:14:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:11:21.949 17:14:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:21.949 17:14:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:21.949 17:14:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:21.949 17:14:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:21.949 17:14:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:21.949 17:14:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:22.208 17:14:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.209 17:14:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.209 17:14:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:22.209 17:14:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:22.209 17:14:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:22.209 17:14:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.209 17:14:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.209 17:14:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.209 17:14:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.209 17:14:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.209 17:14:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:22.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:22.209 00:11:22.209 --- 10.0.0.2 ping statistics --- 00:11:22.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.209 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:22.209 17:14:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:22.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:22.209 00:11:22.209 --- 10.0.0.3 ping statistics --- 00:11:22.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.209 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:22.209 17:14:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:22.209 00:11:22.209 --- 10.0.0.1 ping statistics --- 00:11:22.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.209 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:22.209 17:14:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.209 17:14:52 -- nvmf/common.sh@422 -- # return 0 00:11:22.209 17:14:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:22.209 17:14:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.209 17:14:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:22.209 17:14:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:22.209 17:14:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.209 17:14:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:22.209 17:14:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:22.209 17:14:52 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:22.209 17:14:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:22.209 17:14:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:22.209 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.209 17:14:52 -- nvmf/common.sh@470 -- # nvmfpid=71234 00:11:22.209 17:14:52 -- nvmf/common.sh@471 -- # waitforlisten 71234 00:11:22.209 17:14:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.209 17:14:52 -- common/autotest_common.sh@817 -- # '[' -z 71234 ']' 00:11:22.209 17:14:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.209 17:14:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:22.209 17:14:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.209 17:14:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:22.209 17:14:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.209 [2024-04-25 17:14:52.101066] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:22.209 [2024-04-25 17:14:52.101403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.468 [2024-04-25 17:14:52.241603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.468 [2024-04-25 17:14:52.293447] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.468 [2024-04-25 17:14:52.293490] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.468 [2024-04-25 17:14:52.293498] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.468 [2024-04-25 17:14:52.293505] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.468 [2024-04-25 17:14:52.293511] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
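Before the target starts, nvmf_veth_init builds the virtual test network seen in the trace: the initiator keeps 10.0.0.1 in the root namespace, the target gets 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, the root-side veth ends are bridged together, and TCP port 4420 is opened before nvmf_tgt is launched inside the namespace. Condensed from the traced ip/iptables commands (a sketch; link teardown and error handling are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addresses: initiator on 10.0.0.1, target interfaces on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the root-namespace ends together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic through, then start the target inside the namespace.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &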
00:11:22.468 [2024-04-25 17:14:52.293677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.468 [2024-04-25 17:14:52.293779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.468 [2024-04-25 17:14:52.294200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.468 [2024-04-25 17:14:52.294320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.404 17:14:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:23.404 17:14:53 -- common/autotest_common.sh@850 -- # return 0 00:11:23.404 17:14:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:23.404 17:14:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:23.404 17:14:53 -- common/autotest_common.sh@10 -- # set +x 00:11:23.404 17:14:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.404 17:14:53 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:23.404 17:14:53 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2116 00:11:23.404 [2024-04-25 17:14:53.300214] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:23.404 17:14:53 -- target/invalid.sh@40 -- # out='2024/04/25 17:14:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2116 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:23.404 request: 00:11:23.404 { 00:11:23.404 "method": "nvmf_create_subsystem", 00:11:23.404 "params": { 00:11:23.404 "nqn": "nqn.2016-06.io.spdk:cnode2116", 00:11:23.404 "tgt_name": "foobar" 00:11:23.404 } 00:11:23.404 } 00:11:23.404 Got JSON-RPC error response 00:11:23.404 GoRPCClient: error on JSON-RPC call' 00:11:23.404 17:14:53 -- target/invalid.sh@41 -- # [[ 2024/04/25 17:14:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2116 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:23.404 request: 00:11:23.404 { 00:11:23.404 "method": "nvmf_create_subsystem", 00:11:23.404 "params": { 00:11:23.404 "nqn": "nqn.2016-06.io.spdk:cnode2116", 00:11:23.404 "tgt_name": "foobar" 00:11:23.404 } 00:11:23.404 } 00:11:23.404 Got JSON-RPC error response 00:11:23.404 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:23.404 17:14:53 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:23.404 17:14:53 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17627 00:11:23.672 [2024-04-25 17:14:53.612573] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17627: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:23.672 17:14:53 -- target/invalid.sh@45 -- # out='2024/04/25 17:14:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17627 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:23.672 request: 00:11:23.672 { 00:11:23.672 "method": "nvmf_create_subsystem", 00:11:23.672 "params": { 00:11:23.672 "nqn": "nqn.2016-06.io.spdk:cnode17627", 00:11:23.672 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:11:23.672 } 00:11:23.672 } 00:11:23.672 Got JSON-RPC error response 00:11:23.672 GoRPCClient: error on JSON-RPC call' 00:11:23.672 17:14:53 -- target/invalid.sh@46 -- # [[ 2024/04/25 17:14:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17627 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:23.672 request: 00:11:23.672 { 00:11:23.672 "method": "nvmf_create_subsystem", 00:11:23.672 "params": { 00:11:23.672 "nqn": "nqn.2016-06.io.spdk:cnode17627", 00:11:23.672 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:23.672 } 00:11:23.672 } 00:11:23.672 Got JSON-RPC error response 00:11:23.672 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:23.959 17:14:53 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:23.959 17:14:53 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2911 00:11:23.959 [2024-04-25 17:14:53.904839] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2911: invalid model number 'SPDK_Controller' 00:11:23.959 17:14:53 -- target/invalid.sh@50 -- # out='2024/04/25 17:14:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2911], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:23.959 request: 00:11:23.959 { 00:11:23.959 "method": "nvmf_create_subsystem", 00:11:23.959 "params": { 00:11:23.959 "nqn": "nqn.2016-06.io.spdk:cnode2911", 00:11:23.959 "model_number": "SPDK_Controller\u001f" 00:11:23.959 } 00:11:23.959 } 00:11:23.959 Got JSON-RPC error response 00:11:23.959 GoRPCClient: error on JSON-RPC call' 00:11:23.959 17:14:53 -- target/invalid.sh@51 -- # [[ 2024/04/25 17:14:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2911], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:23.959 request: 00:11:23.959 { 00:11:23.959 "method": "nvmf_create_subsystem", 00:11:23.959 "params": { 00:11:23.959 "nqn": "nqn.2016-06.io.spdk:cnode2911", 00:11:23.959 "model_number": "SPDK_Controller\u001f" 00:11:23.959 } 00:11:23.959 } 00:11:23.959 Got JSON-RPC error response 00:11:23.959 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:23.959 17:14:53 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:23.959 17:14:53 -- target/invalid.sh@19 -- # local length=21 ll 00:11:23.959 17:14:53 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:23.959 17:14:53 -- target/invalid.sh@21 -- # local chars 00:11:23.959 17:14:53 -- target/invalid.sh@22 -- # local string 00:11:23.959 17:14:53 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:23.959 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.959 
17:14:53 -- target/invalid.sh@25 -- # printf %x 73 00:11:23.959 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:23.959 17:14:53 -- target/invalid.sh@25 -- # string+=I 00:11:23.959 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.959 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 126 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+='~' 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 64 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=@ 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 50 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=2 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 38 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+='&' 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 108 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=l 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 56 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=8 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 107 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=k 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 53 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=5 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 78 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=N 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 58 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=: 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 
17:14:53 -- target/invalid.sh@25 -- # printf %x 78 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=N 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 77 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=M 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 104 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # string+=h 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # printf %x 123 00:11:24.218 17:14:53 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:24.218 17:14:54 -- target/invalid.sh@25 -- # string+='{' 00:11:24.218 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:54 -- target/invalid.sh@25 -- # printf %x 109 00:11:24.218 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:24.218 17:14:54 -- target/invalid.sh@25 -- # string+=m 00:11:24.218 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.218 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.218 17:14:54 -- target/invalid.sh@25 -- # printf %x 102 00:11:24.218 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # string+=f 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # printf %x 88 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # string+=X 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # printf %x 111 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # string+=o 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # printf %x 120 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # string+=x 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # printf %x 108 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:24.219 17:14:54 -- target/invalid.sh@25 -- # string+=l 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.219 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.219 17:14:54 -- target/invalid.sh@28 -- # [[ I == \- ]] 00:11:24.219 17:14:54 -- target/invalid.sh@31 -- # echo 'I~@2&l8k5N:NMh{mfXoxl' 00:11:24.219 17:14:54 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'I~@2&l8k5N:NMh{mfXoxl' nqn.2016-06.io.spdk:cnode6974 00:11:24.478 
[2024-04-25 17:14:54.293219] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6974: invalid serial number 'I~@2&l8k5N:NMh{mfXoxl' 00:11:24.478 17:14:54 -- target/invalid.sh@54 -- # out='2024/04/25 17:14:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6974 serial_number:I~@2&l8k5N:NMh{mfXoxl], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN I~@2&l8k5N:NMh{mfXoxl 00:11:24.478 request: 00:11:24.478 { 00:11:24.478 "method": "nvmf_create_subsystem", 00:11:24.478 "params": { 00:11:24.478 "nqn": "nqn.2016-06.io.spdk:cnode6974", 00:11:24.478 "serial_number": "I~@2&l8k5N:NMh{mfXoxl" 00:11:24.478 } 00:11:24.478 } 00:11:24.478 Got JSON-RPC error response 00:11:24.478 GoRPCClient: error on JSON-RPC call' 00:11:24.478 17:14:54 -- target/invalid.sh@55 -- # [[ 2024/04/25 17:14:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6974 serial_number:I~@2&l8k5N:NMh{mfXoxl], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN I~@2&l8k5N:NMh{mfXoxl 00:11:24.478 request: 00:11:24.478 { 00:11:24.478 "method": "nvmf_create_subsystem", 00:11:24.478 "params": { 00:11:24.478 "nqn": "nqn.2016-06.io.spdk:cnode6974", 00:11:24.478 "serial_number": "I~@2&l8k5N:NMh{mfXoxl" 00:11:24.478 } 00:11:24.478 } 00:11:24.478 Got JSON-RPC error response 00:11:24.478 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:24.478 17:14:54 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:24.478 17:14:54 -- target/invalid.sh@19 -- # local length=41 ll 00:11:24.478 17:14:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:24.478 17:14:54 -- target/invalid.sh@21 -- # local chars 00:11:24.478 17:14:54 -- target/invalid.sh@22 -- # local string 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 34 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+='"' 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 95 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=_ 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 116 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=t 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 119 00:11:24.478 17:14:54 -- 
target/invalid.sh@25 -- # echo -e '\x77' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=w 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 67 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=C 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 44 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=, 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 124 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+='|' 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 116 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=t 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 126 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+='~' 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 92 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+='\' 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 38 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+='&' 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 93 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=']' 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 118 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=v 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 82 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=R 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 88 00:11:24.478 17:14:54 
-- target/invalid.sh@25 -- # echo -e '\x58' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=X 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 46 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=. 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 115 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=s 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 117 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=u 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 44 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # string+=, 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.478 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # printf %x 35 00:11:24.478 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+='#' 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 68 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+=D 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 107 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+=k 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 57 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+=9 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 122 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+=z 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 42 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+='*' 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 33 00:11:24.479 17:14:54 -- 
target/invalid.sh@25 -- # echo -e '\x21' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+='!' 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 94 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+='^' 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # printf %x 76 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:24.479 17:14:54 -- target/invalid.sh@25 -- # string+=L 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.479 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 109 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=m 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 113 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=q 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 69 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=E 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 42 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+='*' 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 74 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=J 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 75 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=K 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 35 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+='#' 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 122 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=z 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 68 00:11:24.738 17:14:54 -- 
target/invalid.sh@25 -- # echo -e '\x44' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=D 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 70 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=F 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 51 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=3 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 103 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=g 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # printf %x 106 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:24.738 17:14:54 -- target/invalid.sh@25 -- # string+=j 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.738 17:14:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.738 17:14:54 -- target/invalid.sh@28 -- # [[ " == \- ]] 00:11:24.738 17:14:54 -- target/invalid.sh@31 -- # echo '"_twC,|t~\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj' 00:11:24.738 17:14:54 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '"_twC,|t~\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj' nqn.2016-06.io.spdk:cnode27918 00:11:24.997 [2024-04-25 17:14:54.765706] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27918: invalid model number '"_twC,|t~\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj' 00:11:24.997 17:14:54 -- target/invalid.sh@58 -- # out='2024/04/25 17:14:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:"_twC,|t~\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj nqn:nqn.2016-06.io.spdk:cnode27918], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN "_twC,|t~\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj 00:11:24.997 request: 00:11:24.997 { 00:11:24.997 "method": "nvmf_create_subsystem", 00:11:24.997 "params": { 00:11:24.997 "nqn": "nqn.2016-06.io.spdk:cnode27918", 00:11:24.997 "model_number": "\"_twC,|t~\\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj" 00:11:24.997 } 00:11:24.997 } 00:11:24.997 Got JSON-RPC error response 00:11:24.997 GoRPCClient: error on JSON-RPC call' 00:11:24.997 17:14:54 -- target/invalid.sh@59 -- # [[ 2024/04/25 17:14:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:"_twC,|t~\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj nqn:nqn.2016-06.io.spdk:cnode27918], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN "_twC,|t~\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj 00:11:24.997 request: 00:11:24.997 { 00:11:24.997 "method": "nvmf_create_subsystem", 00:11:24.997 "params": { 00:11:24.997 "nqn": "nqn.2016-06.io.spdk:cnode27918", 00:11:24.997 "model_number": "\"_twC,|t~\\&]vRX.su,#Dk9z*!^LmqE*JK#zDF3gj" 00:11:24.997 } 00:11:24.997 } 00:11:24.997 Got JSON-RPC error response 00:11:24.997 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* 
]] 00:11:24.997 17:14:54 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:25.255 [2024-04-25 17:14:55.042027] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.255 17:14:55 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:25.515 17:14:55 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:25.515 17:14:55 -- target/invalid.sh@67 -- # echo '' 00:11:25.515 17:14:55 -- target/invalid.sh@67 -- # head -n 1 00:11:25.515 17:14:55 -- target/invalid.sh@67 -- # IP= 00:11:25.515 17:14:55 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:25.774 [2024-04-25 17:14:55.554658] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:25.774 17:14:55 -- target/invalid.sh@69 -- # out='2024/04/25 17:14:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:25.774 request: 00:11:25.774 { 00:11:25.774 "method": "nvmf_subsystem_remove_listener", 00:11:25.774 "params": { 00:11:25.774 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:25.774 "listen_address": { 00:11:25.774 "trtype": "tcp", 00:11:25.774 "traddr": "", 00:11:25.774 "trsvcid": "4421" 00:11:25.774 } 00:11:25.774 } 00:11:25.774 } 00:11:25.774 Got JSON-RPC error response 00:11:25.774 GoRPCClient: error on JSON-RPC call' 00:11:25.774 17:14:55 -- target/invalid.sh@70 -- # [[ 2024/04/25 17:14:55 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:25.774 request: 00:11:25.774 { 00:11:25.774 "method": "nvmf_subsystem_remove_listener", 00:11:25.774 "params": { 00:11:25.774 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:25.774 "listen_address": { 00:11:25.774 "trtype": "tcp", 00:11:25.774 "traddr": "", 00:11:25.774 "trsvcid": "4421" 00:11:25.774 } 00:11:25.774 } 00:11:25.774 } 00:11:25.774 Got JSON-RPC error response 00:11:25.774 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:25.774 17:14:55 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23150 -i 0 00:11:26.033 [2024-04-25 17:14:55.790834] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23150: invalid cntlid range [0-65519] 00:11:26.033 17:14:55 -- target/invalid.sh@73 -- # out='2024/04/25 17:14:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23150], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:26.033 request: 00:11:26.033 { 00:11:26.033 "method": "nvmf_create_subsystem", 00:11:26.033 "params": { 00:11:26.033 "nqn": "nqn.2016-06.io.spdk:cnode23150", 00:11:26.033 "min_cntlid": 0 00:11:26.033 } 00:11:26.033 } 00:11:26.033 Got JSON-RPC error response 00:11:26.033 GoRPCClient: error on JSON-RPC call' 00:11:26.033 17:14:55 -- target/invalid.sh@74 -- # [[ 2024/04/25 17:14:55 error on JSON-RPC call, 
method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode23150], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:26.033 request: 00:11:26.033 { 00:11:26.033 "method": "nvmf_create_subsystem", 00:11:26.033 "params": { 00:11:26.033 "nqn": "nqn.2016-06.io.spdk:cnode23150", 00:11:26.033 "min_cntlid": 0 00:11:26.033 } 00:11:26.033 } 00:11:26.033 Got JSON-RPC error response 00:11:26.033 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:26.033 17:14:55 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4049 -i 65520 00:11:26.293 [2024-04-25 17:14:56.075118] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4049: invalid cntlid range [65520-65519] 00:11:26.293 17:14:56 -- target/invalid.sh@75 -- # out='2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4049], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:26.293 request: 00:11:26.293 { 00:11:26.293 "method": "nvmf_create_subsystem", 00:11:26.293 "params": { 00:11:26.293 "nqn": "nqn.2016-06.io.spdk:cnode4049", 00:11:26.293 "min_cntlid": 65520 00:11:26.293 } 00:11:26.293 } 00:11:26.293 Got JSON-RPC error response 00:11:26.293 GoRPCClient: error on JSON-RPC call' 00:11:26.293 17:14:56 -- target/invalid.sh@76 -- # [[ 2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4049], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:26.293 request: 00:11:26.293 { 00:11:26.293 "method": "nvmf_create_subsystem", 00:11:26.293 "params": { 00:11:26.293 "nqn": "nqn.2016-06.io.spdk:cnode4049", 00:11:26.293 "min_cntlid": 65520 00:11:26.293 } 00:11:26.293 } 00:11:26.293 Got JSON-RPC error response 00:11:26.293 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:26.293 17:14:56 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14975 -I 0 00:11:26.552 [2024-04-25 17:14:56.379433] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14975: invalid cntlid range [1-0] 00:11:26.552 17:14:56 -- target/invalid.sh@77 -- # out='2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14975], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:26.552 request: 00:11:26.552 { 00:11:26.552 "method": "nvmf_create_subsystem", 00:11:26.552 "params": { 00:11:26.552 "nqn": "nqn.2016-06.io.spdk:cnode14975", 00:11:26.552 "max_cntlid": 0 00:11:26.552 } 00:11:26.552 } 00:11:26.552 Got JSON-RPC error response 00:11:26.552 GoRPCClient: error on JSON-RPC call' 00:11:26.552 17:14:56 -- target/invalid.sh@78 -- # [[ 2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14975], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:26.552 request: 00:11:26.552 { 00:11:26.552 "method": "nvmf_create_subsystem", 00:11:26.552 "params": { 00:11:26.552 "nqn": 
"nqn.2016-06.io.spdk:cnode14975", 00:11:26.552 "max_cntlid": 0 00:11:26.552 } 00:11:26.552 } 00:11:26.552 Got JSON-RPC error response 00:11:26.552 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:26.552 17:14:56 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16905 -I 65520 00:11:26.812 [2024-04-25 17:14:56.643689] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16905: invalid cntlid range [1-65520] 00:11:26.812 17:14:56 -- target/invalid.sh@79 -- # out='2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16905], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:26.812 request: 00:11:26.812 { 00:11:26.812 "method": "nvmf_create_subsystem", 00:11:26.812 "params": { 00:11:26.812 "nqn": "nqn.2016-06.io.spdk:cnode16905", 00:11:26.812 "max_cntlid": 65520 00:11:26.812 } 00:11:26.812 } 00:11:26.812 Got JSON-RPC error response 00:11:26.812 GoRPCClient: error on JSON-RPC call' 00:11:26.812 17:14:56 -- target/invalid.sh@80 -- # [[ 2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16905], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:26.812 request: 00:11:26.812 { 00:11:26.812 "method": "nvmf_create_subsystem", 00:11:26.812 "params": { 00:11:26.812 "nqn": "nqn.2016-06.io.spdk:cnode16905", 00:11:26.812 "max_cntlid": 65520 00:11:26.812 } 00:11:26.812 } 00:11:26.812 Got JSON-RPC error response 00:11:26.812 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:26.812 17:14:56 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16512 -i 6 -I 5 00:11:27.070 [2024-04-25 17:14:56.883904] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16512: invalid cntlid range [6-5] 00:11:27.070 17:14:56 -- target/invalid.sh@83 -- # out='2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16512], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:27.070 request: 00:11:27.070 { 00:11:27.070 "method": "nvmf_create_subsystem", 00:11:27.070 "params": { 00:11:27.070 "nqn": "nqn.2016-06.io.spdk:cnode16512", 00:11:27.070 "min_cntlid": 6, 00:11:27.070 "max_cntlid": 5 00:11:27.070 } 00:11:27.070 } 00:11:27.070 Got JSON-RPC error response 00:11:27.070 GoRPCClient: error on JSON-RPC call' 00:11:27.070 17:14:56 -- target/invalid.sh@84 -- # [[ 2024/04/25 17:14:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode16512], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:27.070 request: 00:11:27.070 { 00:11:27.070 "method": "nvmf_create_subsystem", 00:11:27.070 "params": { 00:11:27.070 "nqn": "nqn.2016-06.io.spdk:cnode16512", 00:11:27.070 "min_cntlid": 6, 00:11:27.070 "max_cntlid": 5 00:11:27.070 } 00:11:27.070 } 00:11:27.070 Got JSON-RPC error response 00:11:27.070 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:27.070 17:14:56 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:27.070 17:14:57 -- target/invalid.sh@87 -- # out='request: 00:11:27.070 { 00:11:27.070 "name": "foobar", 00:11:27.070 "method": "nvmf_delete_target", 00:11:27.070 "req_id": 1 00:11:27.070 } 00:11:27.070 Got JSON-RPC error response 00:11:27.070 response: 00:11:27.070 { 00:11:27.070 "code": -32602, 00:11:27.070 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:27.070 }' 00:11:27.070 17:14:57 -- target/invalid.sh@88 -- # [[ request: 00:11:27.070 { 00:11:27.070 "name": "foobar", 00:11:27.070 "method": "nvmf_delete_target", 00:11:27.070 "req_id": 1 00:11:27.070 } 00:11:27.070 Got JSON-RPC error response 00:11:27.070 response: 00:11:27.070 { 00:11:27.070 "code": -32602, 00:11:27.070 "message": "The specified target doesn't exist, cannot delete it." 00:11:27.070 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:27.070 17:14:57 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:27.070 17:14:57 -- target/invalid.sh@91 -- # nvmftestfini 00:11:27.070 17:14:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:27.070 17:14:57 -- nvmf/common.sh@117 -- # sync 00:11:27.329 17:14:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.329 17:14:57 -- nvmf/common.sh@120 -- # set +e 00:11:27.329 17:14:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.329 17:14:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.329 rmmod nvme_tcp 00:11:27.329 rmmod nvme_fabrics 00:11:27.329 rmmod nvme_keyring 00:11:27.329 17:14:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.329 17:14:57 -- nvmf/common.sh@124 -- # set -e 00:11:27.329 17:14:57 -- nvmf/common.sh@125 -- # return 0 00:11:27.329 17:14:57 -- nvmf/common.sh@478 -- # '[' -n 71234 ']' 00:11:27.329 17:14:57 -- nvmf/common.sh@479 -- # killprocess 71234 00:11:27.329 17:14:57 -- common/autotest_common.sh@936 -- # '[' -z 71234 ']' 00:11:27.329 17:14:57 -- common/autotest_common.sh@940 -- # kill -0 71234 00:11:27.329 17:14:57 -- common/autotest_common.sh@941 -- # uname 00:11:27.329 17:14:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:27.329 17:14:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71234 00:11:27.329 killing process with pid 71234 00:11:27.329 17:14:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:27.329 17:14:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:27.329 17:14:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71234' 00:11:27.329 17:14:57 -- common/autotest_common.sh@955 -- # kill 71234 00:11:27.329 17:14:57 -- common/autotest_common.sh@960 -- # wait 71234 00:11:27.589 17:14:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:27.589 17:14:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:27.589 17:14:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:27.589 17:14:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.589 17:14:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.589 17:14:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.589 17:14:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.589 17:14:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.589 17:14:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:27.589 00:11:27.589 real 
0m5.800s 00:11:27.589 user 0m23.320s 00:11:27.589 sys 0m1.204s 00:11:27.589 17:14:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.589 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:11:27.589 ************************************ 00:11:27.589 END TEST nvmf_invalid 00:11:27.589 ************************************ 00:11:27.589 17:14:57 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:27.589 17:14:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:27.589 17:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.589 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:11:27.589 ************************************ 00:11:27.589 START TEST nvmf_abort 00:11:27.589 ************************************ 00:11:27.589 17:14:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:27.589 * Looking for test storage... 00:11:27.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.589 17:14:57 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.589 17:14:57 -- nvmf/common.sh@7 -- # uname -s 00:11:27.589 17:14:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.589 17:14:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.589 17:14:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.589 17:14:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.589 17:14:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.589 17:14:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.589 17:14:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.589 17:14:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.589 17:14:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.589 17:14:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.589 17:14:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:27.589 17:14:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:27.589 17:14:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.589 17:14:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.589 17:14:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.589 17:14:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.589 17:14:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.589 17:14:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.589 17:14:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.589 17:14:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.589 17:14:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.589 17:14:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.589 17:14:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.590 17:14:57 -- paths/export.sh@5 -- # export PATH 00:11:27.590 17:14:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.590 17:14:57 -- nvmf/common.sh@47 -- # : 0 00:11:27.590 17:14:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.849 17:14:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.849 17:14:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.849 17:14:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.849 17:14:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.849 17:14:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.849 17:14:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.849 17:14:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.849 17:14:57 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.849 17:14:57 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:27.849 17:14:57 -- target/abort.sh@14 -- # nvmftestinit 00:11:27.849 17:14:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:27.849 17:14:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.849 17:14:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:27.849 17:14:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:27.849 17:14:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:27.849 17:14:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.849 17:14:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.849 17:14:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.849 17:14:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:27.849 17:14:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:27.849 17:14:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:27.849 17:14:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:27.849 17:14:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:27.849 17:14:57 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:11:27.849 17:14:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.849 17:14:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.849 17:14:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:27.849 17:14:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:27.849 17:14:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.849 17:14:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.849 17:14:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.849 17:14:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.849 17:14:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.849 17:14:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.849 17:14:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.849 17:14:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.849 17:14:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:27.849 17:14:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:27.849 Cannot find device "nvmf_tgt_br" 00:11:27.849 17:14:57 -- nvmf/common.sh@155 -- # true 00:11:27.849 17:14:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.849 Cannot find device "nvmf_tgt_br2" 00:11:27.849 17:14:57 -- nvmf/common.sh@156 -- # true 00:11:27.849 17:14:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:27.849 17:14:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:27.849 Cannot find device "nvmf_tgt_br" 00:11:27.849 17:14:57 -- nvmf/common.sh@158 -- # true 00:11:27.849 17:14:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:27.849 Cannot find device "nvmf_tgt_br2" 00:11:27.849 17:14:57 -- nvmf/common.sh@159 -- # true 00:11:27.849 17:14:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:27.849 17:14:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:27.849 17:14:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.849 17:14:57 -- nvmf/common.sh@162 -- # true 00:11:27.849 17:14:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.849 17:14:57 -- nvmf/common.sh@163 -- # true 00:11:27.849 17:14:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.849 17:14:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.849 17:14:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.849 17:14:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.849 17:14:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.849 17:14:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.849 17:14:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.849 17:14:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:27.849 17:14:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:27.849 17:14:57 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:11:27.849 17:14:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:27.849 17:14:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:27.849 17:14:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:27.849 17:14:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.109 17:14:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.109 17:14:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.109 17:14:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:28.109 17:14:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:28.109 17:14:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.109 17:14:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.109 17:14:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.109 17:14:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.109 17:14:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.109 17:14:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:28.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:11:28.109 00:11:28.109 --- 10.0.0.2 ping statistics --- 00:11:28.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.109 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:28.109 17:14:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:28.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:11:28.109 00:11:28.109 --- 10.0.0.3 ping statistics --- 00:11:28.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.109 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:28.109 17:14:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:28.109 00:11:28.109 --- 10.0.0.1 ping statistics --- 00:11:28.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.109 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:28.109 17:14:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.109 17:14:57 -- nvmf/common.sh@422 -- # return 0 00:11:28.109 17:14:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:28.109 17:14:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.109 17:14:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:28.109 17:14:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:28.109 17:14:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.109 17:14:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:28.109 17:14:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:28.109 17:14:57 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:28.109 17:14:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:28.109 17:14:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:28.109 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:11:28.109 17:14:57 -- nvmf/common.sh@470 -- # nvmfpid=71747 00:11:28.109 17:14:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:28.109 17:14:57 -- nvmf/common.sh@471 -- # waitforlisten 71747 00:11:28.109 17:14:57 -- common/autotest_common.sh@817 -- # '[' -z 71747 ']' 00:11:28.109 17:14:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.109 17:14:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:28.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.109 17:14:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.109 17:14:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:28.109 17:14:57 -- common/autotest_common.sh@10 -- # set +x 00:11:28.109 [2024-04-25 17:14:57.994013] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:28.109 [2024-04-25 17:14:57.994119] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.369 [2024-04-25 17:14:58.133599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.369 [2024-04-25 17:14:58.200327] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.369 [2024-04-25 17:14:58.200383] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.369 [2024-04-25 17:14:58.200396] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.369 [2024-04-25 17:14:58.200407] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.369 [2024-04-25 17:14:58.200415] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
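With nvmf_tgt now up for the abort run (the remaining reactor start-up notices continue below), abort.sh provisions the target entirely over JSON-RPC before launching the abort example; rpc_cmd in this log is the test harness's thin wrapper that forwards to scripts/rpc.py. A rough stand-alone sketch of the same provisioning, assuming the repo path, listener address, and bdev names recorded in this run, would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport and bdevs, with the same options this run uses
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB malloc bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev layered on Malloc0 so the abort example has in-flight I/O to cancel
  # subsystem with the delay bdev as its namespace, listening on the veth target address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # drive the abort workload against that listener, as the harness does next
  /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
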
00:11:28.369 [2024-04-25 17:14:58.200576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.369 [2024-04-25 17:14:58.201278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.369 [2024-04-25 17:14:58.201328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.306 17:14:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:29.306 17:14:58 -- common/autotest_common.sh@850 -- # return 0 00:11:29.306 17:14:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:29.306 17:14:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:29.306 17:14:58 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 17:14:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.306 17:14:59 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:29.306 17:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.306 17:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 [2024-04-25 17:14:59.012302] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.306 17:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.306 17:14:59 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:29.306 17:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.306 17:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 Malloc0 00:11:29.306 17:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.306 17:14:59 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:29.306 17:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.306 17:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 Delay0 00:11:29.306 17:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.306 17:14:59 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:29.306 17:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.306 17:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 17:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.306 17:14:59 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:29.306 17:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.306 17:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 17:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.306 17:14:59 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:29.306 17:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.306 17:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 [2024-04-25 17:14:59.084593] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.306 17:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.306 17:14:59 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:29.306 17:14:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.306 17:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 17:14:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.306 17:14:59 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:29.306 [2024-04-25 17:14:59.266565] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:31.842 Initializing NVMe Controllers 00:11:31.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:31.843 controller IO queue size 128 less than required 00:11:31.843 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:31.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:31.843 Initialization complete. Launching workers. 00:11:31.843 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33004 00:11:31.843 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33065, failed to submit 62 00:11:31.843 success 33008, unsuccess 57, failed 0 00:11:31.843 17:15:01 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:31.843 17:15:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:31.843 17:15:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 17:15:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:31.843 17:15:01 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:31.843 17:15:01 -- target/abort.sh@38 -- # nvmftestfini 00:11:31.843 17:15:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:31.843 17:15:01 -- nvmf/common.sh@117 -- # sync 00:11:31.843 17:15:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.843 17:15:01 -- nvmf/common.sh@120 -- # set +e 00:11:31.843 17:15:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.843 17:15:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.843 rmmod nvme_tcp 00:11:31.843 rmmod nvme_fabrics 00:11:31.843 rmmod nvme_keyring 00:11:31.843 17:15:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.843 17:15:01 -- nvmf/common.sh@124 -- # set -e 00:11:31.843 17:15:01 -- nvmf/common.sh@125 -- # return 0 00:11:31.843 17:15:01 -- nvmf/common.sh@478 -- # '[' -n 71747 ']' 00:11:31.843 17:15:01 -- nvmf/common.sh@479 -- # killprocess 71747 00:11:31.843 17:15:01 -- common/autotest_common.sh@936 -- # '[' -z 71747 ']' 00:11:31.843 17:15:01 -- common/autotest_common.sh@940 -- # kill -0 71747 00:11:31.843 17:15:01 -- common/autotest_common.sh@941 -- # uname 00:11:31.843 17:15:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:31.843 17:15:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71747 00:11:31.843 17:15:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:31.843 killing process with pid 71747 00:11:31.843 17:15:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:31.843 17:15:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71747' 00:11:31.843 17:15:01 -- common/autotest_common.sh@955 -- # kill 71747 00:11:31.843 17:15:01 -- common/autotest_common.sh@960 -- # wait 71747 00:11:31.843 17:15:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:31.843 17:15:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:31.843 17:15:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:31.843 17:15:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.843 17:15:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:31.843 17:15:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.843 
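Everything the abort test did above was driven over JSON-RPC before the abort example was launched: a TCP transport, a 64 MB malloc bdev wrapped in a delay bdev, and a subsystem nqn.2016-06.io.spdk:cnode0 exposing that namespace on 10.0.0.2:4420. A condensed sketch of the same sequence issued directly with rpc.py, with arguments copied from the trace above and the default RPC socket assumed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MB backing bdev, 4096-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s of artificial latency so I/O stays queued
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Flood the slow namespace at queue depth 128 so pending I/O has to be aborted.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

With the delay bdev holding each request for roughly a second, most outstanding I/O is still queued when its abort arrives, which is roughly what the summary above (33065 aborts submitted, 33008 successful) reflects.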
17:15:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.843 17:15:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.843 17:15:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:31.843 00:11:31.843 real 0m4.219s 00:11:31.843 user 0m12.232s 00:11:31.843 sys 0m0.940s 00:11:31.843 17:15:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.843 17:15:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 ************************************ 00:11:31.843 END TEST nvmf_abort 00:11:31.843 ************************************ 00:11:31.843 17:15:01 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:31.843 17:15:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:31.843 17:15:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.843 17:15:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.843 ************************************ 00:11:31.843 START TEST nvmf_ns_hotplug_stress 00:11:31.843 ************************************ 00:11:31.843 17:15:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:32.102 * Looking for test storage... 00:11:32.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:32.102 17:15:01 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:32.102 17:15:01 -- nvmf/common.sh@7 -- # uname -s 00:11:32.102 17:15:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.102 17:15:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.102 17:15:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.102 17:15:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.102 17:15:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.102 17:15:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.102 17:15:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.102 17:15:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.102 17:15:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.102 17:15:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.102 17:15:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:32.102 17:15:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:11:32.102 17:15:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.102 17:15:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.102 17:15:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:32.102 17:15:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.102 17:15:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.102 17:15:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.102 17:15:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.102 17:15:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.102 17:15:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.102 17:15:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.102 17:15:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.102 17:15:01 -- paths/export.sh@5 -- # export PATH 00:11:32.102 17:15:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.102 17:15:01 -- nvmf/common.sh@47 -- # : 0 00:11:32.102 17:15:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.102 17:15:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.102 17:15:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.102 17:15:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.102 17:15:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.102 17:15:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.102 17:15:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.102 17:15:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.102 17:15:01 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.102 17:15:01 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:11:32.102 17:15:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:32.102 17:15:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.102 17:15:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:32.102 17:15:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:32.102 17:15:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:32.102 17:15:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:11:32.102 17:15:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.102 17:15:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.102 17:15:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:32.102 17:15:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:32.102 17:15:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:32.102 17:15:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:32.102 17:15:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:32.102 17:15:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:32.102 17:15:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.102 17:15:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.102 17:15:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:32.102 17:15:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:32.102 17:15:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:32.102 17:15:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:32.102 17:15:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:32.102 17:15:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.102 17:15:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:32.102 17:15:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:32.102 17:15:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:32.102 17:15:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:32.102 17:15:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:32.102 17:15:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:32.102 Cannot find device "nvmf_tgt_br" 00:11:32.102 17:15:01 -- nvmf/common.sh@155 -- # true 00:11:32.102 17:15:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:32.103 Cannot find device "nvmf_tgt_br2" 00:11:32.103 17:15:01 -- nvmf/common.sh@156 -- # true 00:11:32.103 17:15:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:32.103 17:15:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:32.103 Cannot find device "nvmf_tgt_br" 00:11:32.103 17:15:01 -- nvmf/common.sh@158 -- # true 00:11:32.103 17:15:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:32.103 Cannot find device "nvmf_tgt_br2" 00:11:32.103 17:15:01 -- nvmf/common.sh@159 -- # true 00:11:32.103 17:15:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:32.103 17:15:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:32.103 17:15:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.103 17:15:02 -- nvmf/common.sh@162 -- # true 00:11:32.103 17:15:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.103 17:15:02 -- nvmf/common.sh@163 -- # true 00:11:32.103 17:15:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:32.103 17:15:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:32.103 17:15:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:32.362 17:15:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:32.362 17:15:02 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:32.362 17:15:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:32.362 17:15:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:32.362 17:15:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:32.362 17:15:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:32.362 17:15:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:32.362 17:15:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:32.362 17:15:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:32.362 17:15:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:32.362 17:15:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:32.362 17:15:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:32.362 17:15:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:32.362 17:15:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:32.362 17:15:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:32.362 17:15:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:32.362 17:15:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:32.362 17:15:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:32.362 17:15:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:32.362 17:15:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:32.362 17:15:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:32.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:11:32.362 00:11:32.362 --- 10.0.0.2 ping statistics --- 00:11:32.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.362 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:32.362 17:15:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:32.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:32.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:32.362 00:11:32.362 --- 10.0.0.3 ping statistics --- 00:11:32.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.362 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:32.362 17:15:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:32.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:32.362 00:11:32.362 --- 10.0.0.1 ping statistics --- 00:11:32.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.362 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:32.362 17:15:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.362 17:15:02 -- nvmf/common.sh@422 -- # return 0 00:11:32.362 17:15:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:32.362 17:15:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.362 17:15:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:32.362 17:15:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:32.362 17:15:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.362 17:15:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:32.362 17:15:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:32.362 17:15:02 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:11:32.362 17:15:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:32.362 17:15:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:32.362 17:15:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 17:15:02 -- nvmf/common.sh@470 -- # nvmfpid=72011 00:11:32.362 17:15:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:32.362 17:15:02 -- nvmf/common.sh@471 -- # waitforlisten 72011 00:11:32.362 17:15:02 -- common/autotest_common.sh@817 -- # '[' -z 72011 ']' 00:11:32.362 17:15:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.362 17:15:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:32.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.362 17:15:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.362 17:15:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:32.362 17:15:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 [2024-04-25 17:15:02.331876] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:11:32.362 [2024-04-25 17:15:02.331967] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.621 [2024-04-25 17:15:02.470723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.621 [2024-04-25 17:15:02.521634] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.621 [2024-04-25 17:15:02.521875] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.621 [2024-04-25 17:15:02.521967] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.621 [2024-04-25 17:15:02.522036] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.621 [2024-04-25 17:15:02.522095] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
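The ip commands a few lines up are the whole test network: the target runs inside the nvmf_tgt_ns_spdk namespace, reachable from the initiator side through veth pairs tied together by the nvmf_br bridge, with TCP port 4420 opened in iptables and connectivity checked by single pings before any NVMe traffic flows. A condensed sketch of that bring-up using the same interface names and addresses (the second target interface on 10.0.0.3 is omitted; error handling as well):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two veth halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator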
00:11:32.621 [2024-04-25 17:15:02.522353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.621 [2024-04-25 17:15:02.522907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.621 [2024-04-25 17:15:02.522926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.880 17:15:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:32.880 17:15:02 -- common/autotest_common.sh@850 -- # return 0 00:11:32.880 17:15:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:32.880 17:15:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:32.880 17:15:02 -- common/autotest_common.sh@10 -- # set +x 00:11:32.880 17:15:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.880 17:15:02 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:11:32.880 17:15:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:33.138 [2024-04-25 17:15:02.948203] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.138 17:15:02 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:33.396 17:15:03 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.655 [2024-04-25 17:15:03.554417] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.655 17:15:03 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.913 17:15:03 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:34.172 Malloc0 00:11:34.172 17:15:04 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:34.431 Delay0 00:11:34.431 17:15:04 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.690 17:15:04 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:34.948 NULL1 00:11:34.948 17:15:04 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:35.207 17:15:05 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=72128 00:11:35.207 17:15:05 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:35.207 17:15:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:35.207 17:15:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.583 Read completed with error (sct=0, sc=11) 00:11:36.583 17:15:06 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.583 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:11:36.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.583 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:36.842 17:15:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:11:36.842 17:15:06 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:37.101 true 00:11:37.101 17:15:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:37.101 17:15:06 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.038 17:15:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.038 17:15:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:11:38.038 17:15:07 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:38.297 true 00:11:38.297 17:15:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:38.297 17:15:08 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.864 17:15:08 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.864 17:15:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:11:38.864 17:15:08 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:39.124 true 00:11:39.124 17:15:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:39.124 17:15:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.383 17:15:09 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.669 17:15:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:11:39.669 17:15:09 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:39.950 true 00:11:39.950 17:15:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:39.950 17:15:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.892 17:15:10 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.150 17:15:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:11:41.150 17:15:10 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:41.409 true 00:11:41.409 17:15:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:41.409 17:15:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.667 17:15:11 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.925 17:15:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 
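The subsystem this loop hammers was assembled over RPC just before the perf workload started (traced above): a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, listeners on 10.0.0.2:4420, a Malloc0-backed Delay0 bdev attached as NSID 1, and a 1000 MB NULL1 bdev attached as NSID 2. As direct rpc.py calls that assembly looks roughly like this (default RPC socket assumed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0                     # 32 MB malloc bdev, 512-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000              # slow namespace, keeps I/O in flight
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # becomes NSID 1
$rpc bdev_null_create NULL1 1000 512                          # 1000 MB null bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # becomes NSID 2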
00:11:41.925 17:15:11 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:42.184 true 00:11:42.184 17:15:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:42.184 17:15:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.442 17:15:12 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.442 17:15:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:11:42.442 17:15:12 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:42.700 true 00:11:42.700 17:15:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:42.700 17:15:12 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.073 17:15:13 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:44.073 17:15:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:11:44.073 17:15:13 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:44.332 true 00:11:44.332 17:15:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:44.332 17:15:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.266 17:15:14 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.525 17:15:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:11:45.525 17:15:15 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:45.525 true 00:11:45.525 17:15:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:45.525 17:15:15 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.784 17:15:15 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.042 17:15:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:11:46.042 17:15:15 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:46.300 true 00:11:46.300 17:15:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:46.300 17:15:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.560 17:15:16 -- target/ns_hotplug_stress.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.819 17:15:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:11:46.819 17:15:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:47.077 true 00:11:47.077 17:15:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:47.077 17:15:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.455 17:15:18 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:48.455 17:15:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:11:48.455 17:15:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:48.714 true 00:11:48.714 17:15:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:48.714 17:15:18 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.650 17:15:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.650 17:15:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:11:49.650 17:15:19 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:49.909 true 00:11:49.909 17:15:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:49.909 17:15:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.167 17:15:20 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.426 17:15:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:11:50.426 17:15:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:50.685 true 00:11:50.685 17:15:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:50.685 17:15:20 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.621 17:15:21 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:51.621 17:15:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:11:51.621 17:15:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:51.879 true 00:11:51.879 17:15:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:51.879 17:15:21 -- target/ns_hotplug_stress.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.137 17:15:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.395 17:15:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:11:52.395 17:15:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:52.653 true 00:11:52.653 17:15:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:52.653 17:15:22 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.933 17:15:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.203 17:15:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:11:53.203 17:15:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:53.461 true 00:11:53.461 17:15:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:53.461 17:15:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.397 17:15:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.655 17:15:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:11:54.656 17:15:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:54.914 true 00:11:54.914 17:15:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:54.914 17:15:24 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.172 17:15:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.430 17:15:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:11:55.430 17:15:25 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:55.689 true 00:11:55.689 17:15:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:55.689 17:15:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.946 17:15:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.204 17:15:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:11:56.204 17:15:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:56.462 true 00:11:56.462 17:15:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:56.462 17:15:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.394 17:15:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.652 17:15:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:11:57.652 17:15:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1021 00:11:57.909 true 00:11:57.909 17:15:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:57.909 17:15:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.166 17:15:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.422 17:15:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:11:58.422 17:15:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:58.681 true 00:11:58.681 17:15:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:58.681 17:15:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.638 17:15:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.638 17:15:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:11:59.638 17:15:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:59.895 true 00:11:59.895 17:15:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:11:59.895 17:15:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.152 17:15:30 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.410 17:15:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:00.410 17:15:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:00.668 true 00:12:00.668 17:15:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:12:00.668 17:15:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.604 17:15:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.863 17:15:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:12:01.863 17:15:31 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:01.863 true 00:12:02.121 17:15:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:12:02.121 17:15:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.121 17:15:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.379 17:15:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:12:02.379 17:15:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:02.637 true 00:12:02.637 17:15:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:12:02.637 17:15:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.572 17:15:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
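Each numbered resize above is one pass of the hotplug loop: for as long as the 30-second spdk_nvme_perf run (pid 72128) is still alive, the Delay0 namespace is removed from cnode1 and re-added under load, and NULL1 is grown by one step so the resize path is exercised at the same time. A rough sketch of the loop those trace lines imply; this is a paraphrase of test/nvmf/target/ns_hotplug_stress.sh, not the script itself:

SPDK=/home/vagrant/spdk_repo/spdk
rpc=$SPDK/scripts/rpc.py
# Random-read workload against both namespaces for 30 s at queue depth 128.
"$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                         # loop until perf exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1 under I/O
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                      # grow NULL1 while I/O continues
done
wait "$PERF_PID"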
00:12:03.830 17:15:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:12:03.830 17:15:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:04.087 true 00:12:04.087 17:15:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:12:04.087 17:15:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.346 17:15:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.346 17:15:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:12:04.346 17:15:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:04.604 true 00:12:04.604 17:15:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:12:04.604 17:15:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.539 Initializing NVMe Controllers 00:12:05.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:05.539 Controller IO queue size 128, less than required. 00:12:05.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:05.539 Controller IO queue size 128, less than required. 00:12:05.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:05.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:05.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:05.539 Initialization complete. Launching workers. 
00:12:05.539 ======================================================== 00:12:05.540 Latency(us) 00:12:05.540 Device Information : IOPS MiB/s Average min max 00:12:05.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 753.17 0.37 88256.14 3303.70 1142563.12 00:12:05.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10161.83 4.96 12595.94 3995.62 662114.58 00:12:05.540 ======================================================== 00:12:05.540 Total : 10915.00 5.33 17816.71 3303.70 1142563.12 00:12:05.540 00:12:05.540 17:15:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.798 17:15:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:12:05.798 17:15:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:06.057 true 00:12:06.057 17:15:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 72128 00:12:06.057 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (72128) - No such process 00:12:06.057 17:15:35 -- target/ns_hotplug_stress.sh@44 -- # wait 72128 00:12:06.057 17:15:35 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:06.057 17:15:35 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:12:06.057 17:15:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:06.057 17:15:35 -- nvmf/common.sh@117 -- # sync 00:12:06.057 17:15:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.057 17:15:35 -- nvmf/common.sh@120 -- # set +e 00:12:06.057 17:15:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.057 17:15:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.057 rmmod nvme_tcp 00:12:06.057 rmmod nvme_fabrics 00:12:06.057 rmmod nvme_keyring 00:12:06.057 17:15:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.057 17:15:35 -- nvmf/common.sh@124 -- # set -e 00:12:06.057 17:15:35 -- nvmf/common.sh@125 -- # return 0 00:12:06.057 17:15:35 -- nvmf/common.sh@478 -- # '[' -n 72011 ']' 00:12:06.057 17:15:35 -- nvmf/common.sh@479 -- # killprocess 72011 00:12:06.057 17:15:35 -- common/autotest_common.sh@936 -- # '[' -z 72011 ']' 00:12:06.057 17:15:35 -- common/autotest_common.sh@940 -- # kill -0 72011 00:12:06.057 17:15:35 -- common/autotest_common.sh@941 -- # uname 00:12:06.057 17:15:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:06.057 17:15:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72011 00:12:06.057 killing process with pid 72011 00:12:06.057 17:15:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:06.057 17:15:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:06.057 17:15:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72011' 00:12:06.057 17:15:35 -- common/autotest_common.sh@955 -- # kill 72011 00:12:06.057 17:15:35 -- common/autotest_common.sh@960 -- # wait 72011 00:12:06.315 17:15:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:06.315 17:15:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:06.315 17:15:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:06.315 17:15:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.315 17:15:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.315 17:15:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.315 17:15:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:06.315 17:15:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.315 17:15:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:06.315 00:12:06.315 real 0m34.379s 00:12:06.315 user 2m27.215s 00:12:06.315 sys 0m7.570s 00:12:06.315 17:15:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:06.315 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.315 ************************************ 00:12:06.315 END TEST nvmf_ns_hotplug_stress 00:12:06.315 ************************************ 00:12:06.315 17:15:36 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:06.315 17:15:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:06.315 17:15:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:06.315 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.315 ************************************ 00:12:06.315 START TEST nvmf_connect_stress 00:12:06.315 ************************************ 00:12:06.315 17:15:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:06.574 * Looking for test storage... 00:12:06.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:06.574 17:15:36 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:06.574 17:15:36 -- nvmf/common.sh@7 -- # uname -s 00:12:06.574 17:15:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.574 17:15:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.574 17:15:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.574 17:15:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.574 17:15:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.574 17:15:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.574 17:15:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.574 17:15:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.574 17:15:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.574 17:15:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.574 17:15:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:06.574 17:15:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:06.574 17:15:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.574 17:15:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.574 17:15:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:06.574 17:15:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.574 17:15:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:06.575 17:15:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.575 17:15:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.575 17:15:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.575 17:15:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.575 17:15:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.575 17:15:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.575 17:15:36 -- paths/export.sh@5 -- # export PATH 00:12:06.575 17:15:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.575 17:15:36 -- nvmf/common.sh@47 -- # : 0 00:12:06.575 17:15:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.575 17:15:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.575 17:15:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.575 17:15:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.575 17:15:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.575 17:15:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.575 17:15:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.575 17:15:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.575 17:15:36 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:06.575 17:15:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:06.575 17:15:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.575 17:15:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:06.575 17:15:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:06.575 17:15:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:06.575 17:15:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.575 17:15:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.575 17:15:36 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.575 17:15:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:06.575 17:15:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:06.575 17:15:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:06.575 17:15:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:06.575 17:15:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:06.575 17:15:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:06.575 17:15:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.575 17:15:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.575 17:15:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:06.575 17:15:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:06.575 17:15:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:06.575 17:15:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:06.575 17:15:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:06.575 17:15:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.575 17:15:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:06.575 17:15:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:06.575 17:15:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:06.575 17:15:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:06.575 17:15:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:06.575 17:15:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:06.575 Cannot find device "nvmf_tgt_br" 00:12:06.575 17:15:36 -- nvmf/common.sh@155 -- # true 00:12:06.575 17:15:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:06.575 Cannot find device "nvmf_tgt_br2" 00:12:06.575 17:15:36 -- nvmf/common.sh@156 -- # true 00:12:06.575 17:15:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:06.575 17:15:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:06.575 Cannot find device "nvmf_tgt_br" 00:12:06.575 17:15:36 -- nvmf/common.sh@158 -- # true 00:12:06.575 17:15:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:06.575 Cannot find device "nvmf_tgt_br2" 00:12:06.575 17:15:36 -- nvmf/common.sh@159 -- # true 00:12:06.575 17:15:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:06.575 17:15:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:06.575 17:15:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:06.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.575 17:15:36 -- nvmf/common.sh@162 -- # true 00:12:06.575 17:15:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:06.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:06.575 17:15:36 -- nvmf/common.sh@163 -- # true 00:12:06.575 17:15:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:06.575 17:15:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:06.575 17:15:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:06.575 17:15:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:06.575 17:15:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:06.834 17:15:36 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:06.834 17:15:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:06.834 17:15:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:06.834 17:15:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:06.834 17:15:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:06.834 17:15:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:06.834 17:15:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:06.834 17:15:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:06.834 17:15:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:06.834 17:15:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:06.834 17:15:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:06.834 17:15:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:06.834 17:15:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:06.834 17:15:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:06.834 17:15:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:06.834 17:15:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:06.834 17:15:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:06.834 17:15:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:06.834 17:15:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:06.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:12:06.834 00:12:06.834 --- 10.0.0.2 ping statistics --- 00:12:06.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.834 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:06.834 17:15:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:06.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:06.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:06.834 00:12:06.834 --- 10.0.0.3 ping statistics --- 00:12:06.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.834 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:06.834 17:15:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:06.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:06.834 00:12:06.834 --- 10.0.0.1 ping statistics --- 00:12:06.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.834 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:06.834 17:15:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.834 17:15:36 -- nvmf/common.sh@422 -- # return 0 00:12:06.834 17:15:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:06.834 17:15:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.834 17:15:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:06.834 17:15:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:06.834 17:15:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.834 17:15:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:06.834 17:15:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:06.834 17:15:36 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:06.834 17:15:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:06.834 17:15:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:06.834 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.834 17:15:36 -- nvmf/common.sh@470 -- # nvmfpid=73258 00:12:06.834 17:15:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:06.834 17:15:36 -- nvmf/common.sh@471 -- # waitforlisten 73258 00:12:06.834 17:15:36 -- common/autotest_common.sh@817 -- # '[' -z 73258 ']' 00:12:06.834 17:15:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.834 17:15:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:06.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.834 17:15:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.834 17:15:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:06.834 17:15:36 -- common/autotest_common.sh@10 -- # set +x 00:12:06.834 [2024-04-25 17:15:36.786233] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:06.834 [2024-04-25 17:15:36.786323] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.094 [2024-04-25 17:15:36.926382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.094 [2024-04-25 17:15:36.994791] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.094 [2024-04-25 17:15:36.994863] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.094 [2024-04-25 17:15:36.994885] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.094 [2024-04-25 17:15:36.994900] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.094 [2024-04-25 17:15:36.994912] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
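A minimal sketch of the veth/bridge topology that nvmf_veth_init builds in the trace above, reconstructed from the logged commands (namespace, interface and address names are taken from the log; this is an illustrative reduction, not the verbatim helper in test/nvmf/common.sh):

  # target-side veth leg lives inside the nvmf_tgt_ns_spdk namespace,
  # initiator-side veth stays in the root namespace, both legs get bridged
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP listener port and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # same reachability check the log performs

With that in place the target can be started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...) and reached from the initiator side at 10.0.0.2:4420, which is what the connect_stress run below relies on.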
00:12:07.094 [2024-04-25 17:15:36.995068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.094 [2024-04-25 17:15:36.995761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.094 [2024-04-25 17:15:36.995789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.031 17:15:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:08.031 17:15:37 -- common/autotest_common.sh@850 -- # return 0 00:12:08.031 17:15:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:08.031 17:15:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:08.031 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:12:08.031 17:15:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.031 17:15:37 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.031 17:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.031 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:12:08.031 [2024-04-25 17:15:37.798201] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.031 17:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.031 17:15:37 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.031 17:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.031 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:12:08.031 17:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.031 17:15:37 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.031 17:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.031 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:12:08.031 [2024-04-25 17:15:37.815460] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.031 17:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.031 17:15:37 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:08.031 17:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.031 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:12:08.031 NULL1 00:12:08.031 17:15:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.031 17:15:37 -- target/connect_stress.sh@21 -- # PERF_PID=73310 00:12:08.031 17:15:37 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:08.031 17:15:37 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:08.031 17:15:37 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- 
target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.031 17:15:37 -- target/connect_stress.sh@28 -- # cat 00:12:08.031 17:15:37 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:08.031 17:15:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.031 17:15:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.031 17:15:37 -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 17:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.290 17:15:38 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:08.290 17:15:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.290 17:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.290 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:12:08.857 17:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:08.857 17:15:38 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:08.857 17:15:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.857 17:15:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:08.857 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:12:09.115 17:15:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.115 17:15:38 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:09.115 17:15:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.115 17:15:38 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:12:09.115 17:15:38 -- common/autotest_common.sh@10 -- # set +x 00:12:09.374 17:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.374 17:15:39 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:09.374 17:15:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.374 17:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.374 17:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:09.632 17:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.632 17:15:39 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:09.632 17:15:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.632 17:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.632 17:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:09.891 17:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.891 17:15:39 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:09.891 17:15:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.891 17:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.891 17:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:10.460 17:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.460 17:15:40 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:10.460 17:15:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.460 17:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.460 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.718 17:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.719 17:15:40 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:10.719 17:15:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.719 17:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.719 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.977 17:15:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:10.977 17:15:40 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:10.977 17:15:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.977 17:15:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:10.977 17:15:40 -- common/autotest_common.sh@10 -- # set +x 00:12:11.236 17:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.236 17:15:41 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:11.236 17:15:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.236 17:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.236 17:15:41 -- common/autotest_common.sh@10 -- # set +x 00:12:11.495 17:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:11.495 17:15:41 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:11.495 17:15:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.495 17:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:11.495 17:15:41 -- common/autotest_common.sh@10 -- # set +x 00:12:12.062 17:15:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.062 17:15:41 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:12.062 17:15:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.062 17:15:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.062 17:15:41 -- common/autotest_common.sh@10 -- # set +x 00:12:12.321 17:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.321 17:15:42 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:12.321 17:15:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.321 17:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.321 
17:15:42 -- common/autotest_common.sh@10 -- # set +x 00:12:12.603 17:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.603 17:15:42 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:12.603 17:15:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.603 17:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.603 17:15:42 -- common/autotest_common.sh@10 -- # set +x 00:12:12.861 17:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.861 17:15:42 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:12.861 17:15:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.861 17:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.861 17:15:42 -- common/autotest_common.sh@10 -- # set +x 00:12:13.120 17:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.120 17:15:43 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:13.120 17:15:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.120 17:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:13.120 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.699 17:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.699 17:15:43 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:13.699 17:15:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.699 17:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:13.699 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.958 17:15:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:13.958 17:15:43 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:13.958 17:15:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.958 17:15:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:13.958 17:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:14.216 17:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:14.216 17:15:44 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:14.216 17:15:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.216 17:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:14.216 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:12:14.475 17:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:14.475 17:15:44 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:14.475 17:15:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.475 17:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:14.475 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:12:14.733 17:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:14.733 17:15:44 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:14.733 17:15:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.733 17:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:14.733 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:12:15.303 17:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.303 17:15:44 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:15.303 17:15:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.303 17:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.303 17:15:44 -- common/autotest_common.sh@10 -- # set +x 00:12:15.561 17:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.562 17:15:45 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:15.562 17:15:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.562 17:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.562 17:15:45 -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.820 17:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:15.820 17:15:45 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:15.820 17:15:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.820 17:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:15.820 17:15:45 -- common/autotest_common.sh@10 -- # set +x 00:12:16.078 17:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.078 17:15:45 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:16.078 17:15:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.078 17:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.078 17:15:45 -- common/autotest_common.sh@10 -- # set +x 00:12:16.337 17:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.337 17:15:46 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:16.337 17:15:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.337 17:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.337 17:15:46 -- common/autotest_common.sh@10 -- # set +x 00:12:16.904 17:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.904 17:15:46 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:16.904 17:15:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.904 17:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.904 17:15:46 -- common/autotest_common.sh@10 -- # set +x 00:12:17.162 17:15:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:17.162 17:15:46 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:17.162 17:15:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.162 17:15:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:17.162 17:15:46 -- common/autotest_common.sh@10 -- # set +x 00:12:17.421 17:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:17.421 17:15:47 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:17.421 17:15:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.421 17:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:17.421 17:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:17.679 17:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:17.679 17:15:47 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:17.679 17:15:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.679 17:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:17.679 17:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:17.938 17:15:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:17.938 17:15:47 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:17.938 17:15:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.938 17:15:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:17.938 17:15:47 -- common/autotest_common.sh@10 -- # set +x 00:12:18.196 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:18.455 17:15:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.455 17:15:48 -- target/connect_stress.sh@34 -- # kill -0 73310 00:12:18.455 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (73310) - No such process 00:12:18.455 17:15:48 -- target/connect_stress.sh@38 -- # wait 73310 00:12:18.455 17:15:48 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:18.455 17:15:48 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:18.455 17:15:48 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:12:18.455 17:15:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:18.455 17:15:48 -- nvmf/common.sh@117 -- # sync 00:12:18.455 17:15:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.455 17:15:48 -- nvmf/common.sh@120 -- # set +e 00:12:18.455 17:15:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.455 17:15:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.455 rmmod nvme_tcp 00:12:18.455 rmmod nvme_fabrics 00:12:18.455 rmmod nvme_keyring 00:12:18.455 17:15:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.455 17:15:48 -- nvmf/common.sh@124 -- # set -e 00:12:18.455 17:15:48 -- nvmf/common.sh@125 -- # return 0 00:12:18.455 17:15:48 -- nvmf/common.sh@478 -- # '[' -n 73258 ']' 00:12:18.455 17:15:48 -- nvmf/common.sh@479 -- # killprocess 73258 00:12:18.455 17:15:48 -- common/autotest_common.sh@936 -- # '[' -z 73258 ']' 00:12:18.455 17:15:48 -- common/autotest_common.sh@940 -- # kill -0 73258 00:12:18.455 17:15:48 -- common/autotest_common.sh@941 -- # uname 00:12:18.455 17:15:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:18.455 17:15:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73258 00:12:18.455 killing process with pid 73258 00:12:18.455 17:15:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:18.455 17:15:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:18.455 17:15:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73258' 00:12:18.455 17:15:48 -- common/autotest_common.sh@955 -- # kill 73258 00:12:18.455 17:15:48 -- common/autotest_common.sh@960 -- # wait 73258 00:12:18.715 17:15:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:18.715 17:15:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:18.715 17:15:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:18.715 17:15:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.715 17:15:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.715 17:15:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.715 17:15:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.715 17:15:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.715 17:15:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:18.715 00:12:18.715 real 0m12.280s 00:12:18.715 user 0m41.048s 00:12:18.715 sys 0m3.205s 00:12:18.715 17:15:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:18.715 17:15:48 -- common/autotest_common.sh@10 -- # set +x 00:12:18.715 ************************************ 00:12:18.715 END TEST nvmf_connect_stress 00:12:18.715 ************************************ 00:12:18.715 17:15:48 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:18.715 17:15:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.715 17:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.715 17:15:48 -- common/autotest_common.sh@10 -- # set +x 00:12:18.715 ************************************ 00:12:18.715 START TEST nvmf_fused_ordering 00:12:18.715 ************************************ 00:12:18.715 17:15:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:18.973 * Looking for test storage... 
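The nvmftestfini/nvmfcleanup sequence that closes the connect_stress run above reduces to a handful of steps; a rough stand-in reconstructed from the trace (the namespace removal itself is hidden behind _remove_spdk_ns with its output suppressed, so that line is an assumption rather than a logged command):

  sync
  modprobe -v -r nvme-tcp       # trace shows nvme_tcp, nvme_fabrics, nvme_keyring being unloaded
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=73258 for this run
  ip netns delete nvmf_tgt_ns_spdk     # assumed: done by _remove_spdk_ns, not visible in the log
  ip -4 addr flush nvmf_init_if

The fused_ordering test that starts next rebuilds the same namespace and bridge from scratch, which is why the identical nvmf_veth_init command sequence reappears below.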
00:12:18.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.973 17:15:48 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.973 17:15:48 -- nvmf/common.sh@7 -- # uname -s 00:12:18.973 17:15:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.973 17:15:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.973 17:15:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.973 17:15:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.973 17:15:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.973 17:15:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.973 17:15:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.973 17:15:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.973 17:15:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.973 17:15:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.973 17:15:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:18.973 17:15:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:18.973 17:15:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.973 17:15:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.973 17:15:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.973 17:15:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.973 17:15:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.973 17:15:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.973 17:15:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.973 17:15:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.973 17:15:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.973 17:15:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.973 17:15:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.973 17:15:48 -- paths/export.sh@5 -- # export PATH 00:12:18.973 17:15:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.974 17:15:48 -- nvmf/common.sh@47 -- # : 0 00:12:18.974 17:15:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.974 17:15:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.974 17:15:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.974 17:15:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.974 17:15:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.974 17:15:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.974 17:15:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.974 17:15:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.974 17:15:48 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:18.974 17:15:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:18.974 17:15:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.974 17:15:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:18.974 17:15:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:18.974 17:15:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:18.974 17:15:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.974 17:15:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.974 17:15:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.974 17:15:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:18.974 17:15:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:18.974 17:15:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:18.974 17:15:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:18.974 17:15:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:18.974 17:15:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:18.974 17:15:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.974 17:15:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.974 17:15:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:18.974 17:15:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:18.974 17:15:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.974 17:15:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.974 17:15:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.974 17:15:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:18.974 17:15:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.974 17:15:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.974 17:15:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.974 17:15:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.974 17:15:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:18.974 17:15:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:18.974 Cannot find device "nvmf_tgt_br" 00:12:18.974 17:15:48 -- nvmf/common.sh@155 -- # true 00:12:18.974 17:15:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.974 Cannot find device "nvmf_tgt_br2" 00:12:18.974 17:15:48 -- nvmf/common.sh@156 -- # true 00:12:18.974 17:15:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:18.974 17:15:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:18.974 Cannot find device "nvmf_tgt_br" 00:12:18.974 17:15:48 -- nvmf/common.sh@158 -- # true 00:12:18.974 17:15:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:18.974 Cannot find device "nvmf_tgt_br2" 00:12:18.974 17:15:48 -- nvmf/common.sh@159 -- # true 00:12:18.974 17:15:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:18.974 17:15:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:18.974 17:15:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.974 17:15:48 -- nvmf/common.sh@162 -- # true 00:12:18.974 17:15:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.974 17:15:48 -- nvmf/common.sh@163 -- # true 00:12:18.974 17:15:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.974 17:15:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.974 17:15:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.974 17:15:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.974 17:15:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:19.232 17:15:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.232 17:15:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.232 17:15:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:19.232 17:15:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:19.232 17:15:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:19.232 17:15:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:19.232 17:15:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:19.232 17:15:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:19.232 17:15:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:19.232 17:15:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:19.232 17:15:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:19.232 17:15:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:19.232 17:15:49 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:19.232 17:15:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.232 17:15:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.232 17:15:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.232 17:15:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.232 17:15:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.232 17:15:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:19.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:19.233 00:12:19.233 --- 10.0.0.2 ping statistics --- 00:12:19.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.233 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:19.233 17:15:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:19.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:19.233 00:12:19.233 --- 10.0.0.3 ping statistics --- 00:12:19.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.233 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:19.233 17:15:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:12:19.233 00:12:19.233 --- 10.0.0.1 ping statistics --- 00:12:19.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.233 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:19.233 17:15:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.233 17:15:49 -- nvmf/common.sh@422 -- # return 0 00:12:19.233 17:15:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:19.233 17:15:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.233 17:15:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:19.233 17:15:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:19.233 17:15:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.233 17:15:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:19.233 17:15:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:19.233 17:15:49 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:19.233 17:15:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:19.233 17:15:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:19.233 17:15:49 -- common/autotest_common.sh@10 -- # set +x 00:12:19.233 17:15:49 -- nvmf/common.sh@470 -- # nvmfpid=73641 00:12:19.233 17:15:49 -- nvmf/common.sh@471 -- # waitforlisten 73641 00:12:19.233 17:15:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:19.233 17:15:49 -- common/autotest_common.sh@817 -- # '[' -z 73641 ']' 00:12:19.233 17:15:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.233 17:15:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.233 17:15:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
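The nvmfappstart/waitforlisten pair above amounts to launching nvmf_tgt inside the namespace and polling its JSON-RPC socket until it answers, before any rpc_cmd is issued. A simplified stand-in (not the actual helper from autotest_common.sh; the rpc.py probe and retry interval are assumptions, only the binary path, flags and max_retries=100 come from the log):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for _ in $(seq 1 100); do    # max_retries=100, as in the trace
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break                # RPC socket is up, safe to provision the target
      fi
      sleep 0.5
  done

Only once the socket answers are the nvmf_create_transport, nvmf_create_subsystem and nvmf_subsystem_add_listener calls that follow in the log issued.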
00:12:19.233 17:15:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.233 17:15:49 -- common/autotest_common.sh@10 -- # set +x 00:12:19.233 [2024-04-25 17:15:49.183691] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:19.233 [2024-04-25 17:15:49.183810] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.492 [2024-04-25 17:15:49.320602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.492 [2024-04-25 17:15:49.377748] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.492 [2024-04-25 17:15:49.377791] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.492 [2024-04-25 17:15:49.377803] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.492 [2024-04-25 17:15:49.377811] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.492 [2024-04-25 17:15:49.377818] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.492 [2024-04-25 17:15:49.377852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.427 17:15:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:20.427 17:15:50 -- common/autotest_common.sh@850 -- # return 0 00:12:20.427 17:15:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:20.427 17:15:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:20.427 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 17:15:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.427 17:15:50 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.427 17:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.427 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 [2024-04-25 17:15:50.238747] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.427 17:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.427 17:15:50 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:20.427 17:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.427 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 17:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.427 17:15:50 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.427 17:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.427 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 [2024-04-25 17:15:50.254854] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.427 17:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.427 17:15:50 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:20.427 17:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.427 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 NULL1 00:12:20.427 17:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.427 17:15:50 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:20.427 17:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.427 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 17:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.427 17:15:50 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:20.427 17:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.427 17:15:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.427 17:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.427 17:15:50 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:20.427 [2024-04-25 17:15:50.308285] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:20.427 [2024-04-25 17:15:50.308339] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73691 ] 00:12:20.994 Attached to nqn.2016-06.io.spdk:cnode1 00:12:20.994 Namespace ID: 1 size: 1GB 00:12:20.994 fused_ordering(0) 00:12:20.994 fused_ordering(1) 00:12:20.994 fused_ordering(2) 00:12:20.994 fused_ordering(3) 00:12:20.994 fused_ordering(4) 00:12:20.994 fused_ordering(5) 00:12:20.994 fused_ordering(6) 00:12:20.994 fused_ordering(7) 00:12:20.994 fused_ordering(8) 00:12:20.994 fused_ordering(9) 00:12:20.994 fused_ordering(10) 00:12:20.994 fused_ordering(11) 00:12:20.994 fused_ordering(12) 00:12:20.994 fused_ordering(13) 00:12:20.994 fused_ordering(14) 00:12:20.994 fused_ordering(15) 00:12:20.994 fused_ordering(16) 00:12:20.994 fused_ordering(17) 00:12:20.994 fused_ordering(18) 00:12:20.994 fused_ordering(19) 00:12:20.994 fused_ordering(20) 00:12:20.994 fused_ordering(21) 00:12:20.994 fused_ordering(22) 00:12:20.994 fused_ordering(23) 00:12:20.994 fused_ordering(24) 00:12:20.994 fused_ordering(25) 00:12:20.994 fused_ordering(26) 00:12:20.994 fused_ordering(27) 00:12:20.994 fused_ordering(28) 00:12:20.994 fused_ordering(29) 00:12:20.994 fused_ordering(30) 00:12:20.994 fused_ordering(31) 00:12:20.994 fused_ordering(32) 00:12:20.994 fused_ordering(33) 00:12:20.994 fused_ordering(34) 00:12:20.994 fused_ordering(35) 00:12:20.994 fused_ordering(36) 00:12:20.994 fused_ordering(37) 00:12:20.994 fused_ordering(38) 00:12:20.994 fused_ordering(39) 00:12:20.994 fused_ordering(40) 00:12:20.994 fused_ordering(41) 00:12:20.994 fused_ordering(42) 00:12:20.994 fused_ordering(43) 00:12:20.994 fused_ordering(44) 00:12:20.994 fused_ordering(45) 00:12:20.994 fused_ordering(46) 00:12:20.994 fused_ordering(47) 00:12:20.994 fused_ordering(48) 00:12:20.994 fused_ordering(49) 00:12:20.994 fused_ordering(50) 00:12:20.994 fused_ordering(51) 00:12:20.994 fused_ordering(52) 00:12:20.994 fused_ordering(53) 00:12:20.994 fused_ordering(54) 00:12:20.994 fused_ordering(55) 00:12:20.994 fused_ordering(56) 00:12:20.994 fused_ordering(57) 00:12:20.994 fused_ordering(58) 00:12:20.994 fused_ordering(59) 00:12:20.994 fused_ordering(60) 00:12:20.994 fused_ordering(61) 00:12:20.994 fused_ordering(62) 00:12:20.994 fused_ordering(63) 00:12:20.994 fused_ordering(64) 00:12:20.994 fused_ordering(65) 00:12:20.994 fused_ordering(66) 00:12:20.994 fused_ordering(67) 00:12:20.994 fused_ordering(68) 00:12:20.994 
fused_ordering(69) 00:12:20.994 fused_ordering(70) 00:12:20.994 fused_ordering(71) 00:12:20.994 fused_ordering(72) 00:12:20.994 fused_ordering(73) 00:12:20.994 fused_ordering(74) 00:12:20.994 fused_ordering(75) 00:12:20.994 fused_ordering(76) 00:12:20.994 fused_ordering(77) 00:12:20.994 fused_ordering(78) 00:12:20.994 fused_ordering(79) 00:12:20.994 fused_ordering(80) 00:12:20.994 fused_ordering(81) 00:12:20.994 fused_ordering(82) 00:12:20.994 fused_ordering(83) 00:12:20.994 fused_ordering(84) 00:12:20.994 fused_ordering(85) 00:12:20.994 fused_ordering(86) 00:12:20.994 fused_ordering(87) 00:12:20.994 fused_ordering(88) 00:12:20.994 fused_ordering(89) 00:12:20.994 fused_ordering(90) 00:12:20.994 fused_ordering(91) 00:12:20.994 fused_ordering(92) 00:12:20.994 fused_ordering(93) 00:12:20.994 fused_ordering(94) 00:12:20.994 fused_ordering(95) 00:12:20.994 fused_ordering(96) 00:12:20.994 fused_ordering(97) 00:12:20.994 fused_ordering(98) 00:12:20.994 fused_ordering(99) 00:12:20.994 fused_ordering(100) 00:12:20.994 fused_ordering(101) 00:12:20.994 fused_ordering(102) 00:12:20.994 fused_ordering(103) 00:12:20.994 fused_ordering(104) 00:12:20.994 fused_ordering(105) 00:12:20.994 fused_ordering(106) 00:12:20.994 fused_ordering(107) 00:12:20.994 fused_ordering(108) 00:12:20.994 fused_ordering(109) 00:12:20.994 fused_ordering(110) 00:12:20.994 fused_ordering(111) 00:12:20.994 fused_ordering(112) 00:12:20.994 fused_ordering(113) 00:12:20.994 fused_ordering(114) 00:12:20.994 fused_ordering(115) 00:12:20.994 fused_ordering(116) 00:12:20.994 fused_ordering(117) 00:12:20.994 fused_ordering(118) 00:12:20.994 fused_ordering(119) 00:12:20.994 fused_ordering(120) 00:12:20.994 fused_ordering(121) 00:12:20.994 fused_ordering(122) 00:12:20.994 fused_ordering(123) 00:12:20.994 fused_ordering(124) 00:12:20.994 fused_ordering(125) 00:12:20.994 fused_ordering(126) 00:12:20.994 fused_ordering(127) 00:12:20.994 fused_ordering(128) 00:12:20.994 fused_ordering(129) 00:12:20.994 fused_ordering(130) 00:12:20.994 fused_ordering(131) 00:12:20.994 fused_ordering(132) 00:12:20.994 fused_ordering(133) 00:12:20.994 fused_ordering(134) 00:12:20.994 fused_ordering(135) 00:12:20.994 fused_ordering(136) 00:12:20.994 fused_ordering(137) 00:12:20.994 fused_ordering(138) 00:12:20.994 fused_ordering(139) 00:12:20.994 fused_ordering(140) 00:12:20.994 fused_ordering(141) 00:12:20.994 fused_ordering(142) 00:12:20.994 fused_ordering(143) 00:12:20.994 fused_ordering(144) 00:12:20.994 fused_ordering(145) 00:12:20.994 fused_ordering(146) 00:12:20.994 fused_ordering(147) 00:12:20.994 fused_ordering(148) 00:12:20.994 fused_ordering(149) 00:12:20.994 fused_ordering(150) 00:12:20.994 fused_ordering(151) 00:12:20.994 fused_ordering(152) 00:12:20.994 fused_ordering(153) 00:12:20.994 fused_ordering(154) 00:12:20.994 fused_ordering(155) 00:12:20.994 fused_ordering(156) 00:12:20.994 fused_ordering(157) 00:12:20.994 fused_ordering(158) 00:12:20.994 fused_ordering(159) 00:12:20.994 fused_ordering(160) 00:12:20.994 fused_ordering(161) 00:12:20.994 fused_ordering(162) 00:12:20.994 fused_ordering(163) 00:12:20.994 fused_ordering(164) 00:12:20.994 fused_ordering(165) 00:12:20.994 fused_ordering(166) 00:12:20.994 fused_ordering(167) 00:12:20.994 fused_ordering(168) 00:12:20.994 fused_ordering(169) 00:12:20.994 fused_ordering(170) 00:12:20.994 fused_ordering(171) 00:12:20.994 fused_ordering(172) 00:12:20.994 fused_ordering(173) 00:12:20.994 fused_ordering(174) 00:12:20.994 fused_ordering(175) 00:12:20.994 fused_ordering(176) 00:12:20.994 fused_ordering(177) 
00:12:20.994 fused_ordering(178) 00:12:20.994 fused_ordering(179) 00:12:20.994 fused_ordering(180) [fused_ordering(181) through fused_ordering(1022) omitted: the counter advances by one on every entry with no gaps, with log timestamps moving from 00:12:20.994 through 00:12:21.253, 00:12:21.512, 00:12:22.080 and 00:12:22.650] 00:12:22.650 fused_ordering(1023) 00:12:22.650 17:15:52 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:22.650 17:15:52 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:22.650 17:15:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:22.650 17:15:52 -- nvmf/common.sh@117 -- # sync 00:12:22.650 17:15:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.650 17:15:52 -- nvmf/common.sh@120 -- # set +e 00:12:22.650 17:15:52 -- nvmf/common.sh@121 --
# for i in {1..20} 00:12:22.650 17:15:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.650 rmmod nvme_tcp 00:12:22.650 rmmod nvme_fabrics 00:12:22.650 rmmod nvme_keyring 00:12:22.650 17:15:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.650 17:15:52 -- nvmf/common.sh@124 -- # set -e 00:12:22.650 17:15:52 -- nvmf/common.sh@125 -- # return 0 00:12:22.650 17:15:52 -- nvmf/common.sh@478 -- # '[' -n 73641 ']' 00:12:22.650 17:15:52 -- nvmf/common.sh@479 -- # killprocess 73641 00:12:22.650 17:15:52 -- common/autotest_common.sh@936 -- # '[' -z 73641 ']' 00:12:22.650 17:15:52 -- common/autotest_common.sh@940 -- # kill -0 73641 00:12:22.650 17:15:52 -- common/autotest_common.sh@941 -- # uname 00:12:22.650 17:15:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:22.650 17:15:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73641 00:12:22.650 killing process with pid 73641 00:12:22.650 17:15:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:22.650 17:15:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:22.650 17:15:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73641' 00:12:22.650 17:15:52 -- common/autotest_common.sh@955 -- # kill 73641 00:12:22.650 17:15:52 -- common/autotest_common.sh@960 -- # wait 73641 00:12:22.909 17:15:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:22.909 17:15:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:22.909 17:15:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:22.909 17:15:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.909 17:15:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.909 17:15:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.909 17:15:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.909 17:15:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.909 17:15:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:22.909 00:12:22.909 real 0m4.185s 00:12:22.909 user 0m5.193s 00:12:22.909 sys 0m1.282s 00:12:22.909 17:15:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:22.909 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:12:22.909 ************************************ 00:12:22.909 END TEST nvmf_fused_ordering 00:12:22.909 ************************************ 00:12:23.169 17:15:52 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:23.169 17:15:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:23.169 17:15:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.169 17:15:52 -- common/autotest_common.sh@10 -- # set +x 00:12:23.169 ************************************ 00:12:23.169 START TEST nvmf_delete_subsystem 00:12:23.169 ************************************ 00:12:23.169 17:15:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:23.169 * Looking for test storage... 
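For reference, the teardown that nvmftestfini logs above can be approximated outside the test harness. The following is a minimal standalone sketch, not the harness code itself; the variable nvmfpid and the use of a polling loop in place of the harness's killprocess helper are assumptions (this run used PID 73641), while the module names and the nvmf_init_if interface mirror the log.

#!/usr/bin/env bash
# Sketch of the cleanup sequence shown above (assumed standalone equivalent, not autotest_common.sh).
set -e
nvmfpid=${nvmfpid:?set to the nvmf_tgt PID, e.g. 73641 in this run}
sync                                  # flush outstanding I/O before touching the initiator modules
modprobe -v -r nvme-tcp || true       # mirrors 'modprobe -v -r nvme-tcp' (rmmod nvme_tcp above)
modprobe -v -r nvme-fabrics || true   # fabrics core unloads once no transport still needs it
kill "$nvmfpid" 2>/dev/null || true   # stop the nvmf_tgt application started for the test
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # wait for it to exit, like the harness polls
ip -4 addr flush nvmf_init_if         # drop the initiator-side test address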
00:12:23.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:23.169 17:15:53 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:23.169 17:15:53 -- nvmf/common.sh@7 -- # uname -s 00:12:23.169 17:15:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.169 17:15:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.169 17:15:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.169 17:15:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.169 17:15:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.169 17:15:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.169 17:15:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.169 17:15:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.169 17:15:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.169 17:15:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.169 17:15:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:23.169 17:15:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:23.169 17:15:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.169 17:15:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.169 17:15:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:23.169 17:15:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.169 17:15:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:23.169 17:15:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.169 17:15:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.169 17:15:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.169 17:15:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.169 17:15:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.169 17:15:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.169 17:15:53 -- paths/export.sh@5 -- # export PATH 00:12:23.169 17:15:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.169 17:15:53 -- nvmf/common.sh@47 -- # : 0 00:12:23.169 17:15:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.169 17:15:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.169 17:15:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.169 17:15:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.169 17:15:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.169 17:15:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.169 17:15:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.169 17:15:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.169 17:15:53 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:23.169 17:15:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:23.169 17:15:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.169 17:15:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:23.169 17:15:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:23.169 17:15:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:23.169 17:15:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.169 17:15:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.169 17:15:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.169 17:15:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:23.169 17:15:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:23.169 17:15:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:23.169 17:15:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:23.169 17:15:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:23.169 17:15:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:23.169 17:15:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.169 17:15:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.169 17:15:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:23.169 17:15:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:23.169 17:15:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:23.169 17:15:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:23.169 17:15:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:23.169 17:15:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:23.169 17:15:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:23.169 17:15:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:23.169 17:15:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:23.169 17:15:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:23.169 17:15:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:23.169 17:15:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:23.169 Cannot find device "nvmf_tgt_br" 00:12:23.169 17:15:53 -- nvmf/common.sh@155 -- # true 00:12:23.169 17:15:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:23.169 Cannot find device "nvmf_tgt_br2" 00:12:23.169 17:15:53 -- nvmf/common.sh@156 -- # true 00:12:23.169 17:15:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:23.169 17:15:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:23.169 Cannot find device "nvmf_tgt_br" 00:12:23.169 17:15:53 -- nvmf/common.sh@158 -- # true 00:12:23.169 17:15:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:23.169 Cannot find device "nvmf_tgt_br2" 00:12:23.169 17:15:53 -- nvmf/common.sh@159 -- # true 00:12:23.169 17:15:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:23.428 17:15:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:23.428 17:15:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:23.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.428 17:15:53 -- nvmf/common.sh@162 -- # true 00:12:23.428 17:15:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:23.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:23.428 17:15:53 -- nvmf/common.sh@163 -- # true 00:12:23.428 17:15:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:23.428 17:15:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:23.428 17:15:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:23.428 17:15:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:23.428 17:15:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:23.428 17:15:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:23.428 17:15:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:23.428 17:15:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:23.428 17:15:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:23.428 17:15:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:23.428 17:15:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:23.428 17:15:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:23.428 17:15:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:23.428 17:15:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:23.428 17:15:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:23.428 17:15:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:23.428 17:15:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:23.428 17:15:53 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:23.428 17:15:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:23.428 17:15:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:23.428 17:15:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:23.428 17:15:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:23.428 17:15:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:23.428 17:15:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:23.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:12:23.428 00:12:23.428 --- 10.0.0.2 ping statistics --- 00:12:23.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.428 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:23.428 17:15:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:23.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:23.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:12:23.428 00:12:23.428 --- 10.0.0.3 ping statistics --- 00:12:23.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.428 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:23.428 17:15:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:23.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:12:23.688 00:12:23.688 --- 10.0.0.1 ping statistics --- 00:12:23.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.688 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:23.688 17:15:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.688 17:15:53 -- nvmf/common.sh@422 -- # return 0 00:12:23.688 17:15:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:23.688 17:15:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.688 17:15:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:23.688 17:15:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:23.688 17:15:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.688 17:15:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:23.688 17:15:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:23.688 17:15:53 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:23.688 17:15:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:23.688 17:15:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:23.688 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 17:15:53 -- nvmf/common.sh@470 -- # nvmfpid=73913 00:12:23.688 17:15:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:23.688 17:15:53 -- nvmf/common.sh@471 -- # waitforlisten 73913 00:12:23.688 17:15:53 -- common/autotest_common.sh@817 -- # '[' -z 73913 ']' 00:12:23.688 17:15:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.688 17:15:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:23.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.688 17:15:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
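The veth/bridge topology that those pings verify can be reproduced by hand. This is a minimal sketch assuming the same names the harness uses (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_tgt_if, nvmf_br) and the 10.0.0.0/24 layout seen above; the harness additionally creates nvmf_tgt_if2/10.0.0.3 and the iptables ACCEPT rules, which are left out here for brevity.

#!/usr/bin/env bash
# Minimal recreation of the namespace/bridge layout exercised by nvmf_veth_init above (sketch, not nvmf/common.sh).
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the test namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.2   # should answer with 0% packet loss, matching the statistics logged above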
00:12:23.688 17:15:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:23.688 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 [2024-04-25 17:15:53.498565] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:23.688 [2024-04-25 17:15:53.498689] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.688 [2024-04-25 17:15:53.639587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:23.976 [2024-04-25 17:15:53.699717] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.976 [2024-04-25 17:15:53.700026] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.976 [2024-04-25 17:15:53.700183] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.976 [2024-04-25 17:15:53.700240] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.976 [2024-04-25 17:15:53.700270] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.976 [2024-04-25 17:15:53.700524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.976 [2024-04-25 17:15:53.700533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.976 17:15:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:23.976 17:15:53 -- common/autotest_common.sh@850 -- # return 0 00:12:23.976 17:15:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:23.976 17:15:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:23.976 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.976 17:15:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.976 17:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.976 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.976 [2024-04-25 17:15:53.838567] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.976 17:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.976 17:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.976 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.976 17:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.976 17:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.976 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.976 [2024-04-25 17:15:53.862672] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.976 17:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:23.976 17:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.976 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.976 
NULL1 00:12:23.976 17:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:23.976 17:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.976 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.976 Delay0 00:12:23.976 17:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.976 17:15:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.976 17:15:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.976 17:15:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@28 -- # perf_pid=73945 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:23.976 17:15:53 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:24.245 [2024-04-25 17:15:54.060405] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:26.147 17:15:55 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.147 17:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.147 17:15:55 -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 starting I/O failed: -6 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 starting I/O failed: -6 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 starting I/O failed: -6 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 starting I/O failed: -6 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 starting I/O failed: -6 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 starting I/O failed: -6 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 Read completed with error (sct=0, sc=8) 00:12:26.147 starting I/O failed: -6 00:12:26.147 Write completed with error (sct=0, sc=8) 00:12:26.147 Write 
completed with error (sct=0, sc=8) 00:12:26.147 [the remaining Read/Write 'completed with error (sct=0, sc=8)' completions of this burst, interleaved with repeated 'starting I/O failed: -6' markers, omitted while queued I/O is failed back] 00:12:26.148 Read completed with error (sct=0, sc=8) [2024-04-25 17:15:56.097411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2c5000c3d0 is same with the state(5) to be set 00:12:26.148 [further aborted-I/O completions omitted] 00:12:26.148 Read completed with error (sct=0, sc=8) [2024-04-25 17:15:56.097858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247ff00 is same with the state(5) to be set 00:12:26.148 [further aborted-I/O completions omitted] 00:12:27.525 [2024-04-25 17:15:57.074247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2480770 is same with the state(5) to be set 00:12:27.525 [further aborted-I/O completions omitted] 00:12:27.525 17:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.525 17:15:57 -- target/delete_subsystem.sh@34 -- # delay=0 00:12:27.525 [further aborted-I/O completions omitted] 00:12:27.525 [2024-04-25 17:15:57.102301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2c5000c690 is same with the state(5) to be set 00:12:27.525 [further aborted-I/O completions omitted] 00:12:27.525 [2024-04-25 17:15:57.102936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2c5000bf90 is same with the state(5) to be set 00:12:27.525 [further aborted-I/O completions omitted] 00:12:27.525 [2024-04-25 17:15:57.103220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24610b0 is same with the state(5) to be set 00:12:27.525 [further aborted-I/O completions omitted] 00:12:27.525 [2024-04-25 17:15:57.103430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247f3b0 is same with the state(5) to be set 00:12:27.525 17:15:57 -- target/delete_subsystem.sh@35 -- # kill -0 73945 00:12:27.525 17:15:57 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:27.525 [2024-04-25 17:15:57.104509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480770 (9): Bad file descriptor 00:12:27.525 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:27.525 Initializing NVMe Controllers 00:12:27.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.525 Controller IO queue size 128, less than
required. 00:12:27.525 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:27.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:27.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:27.525 Initialization complete. Launching workers. 00:12:27.525 ======================================================== 00:12:27.525 Latency(us) 00:12:27.525 Device Information : IOPS MiB/s Average min max 00:12:27.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.30 0.08 891852.10 410.10 1017129.36 00:12:27.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.39 0.08 1003609.80 1603.35 2008508.35 00:12:27.525 ======================================================== 00:12:27.525 Total : 333.70 0.16 945568.97 410.10 2008508.35 00:12:27.525 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@35 -- # kill -0 73945 00:12:27.784 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (73945) - No such process 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@45 -- # NOT wait 73945 00:12:27.784 17:15:57 -- common/autotest_common.sh@638 -- # local es=0 00:12:27.784 17:15:57 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 73945 00:12:27.784 17:15:57 -- common/autotest_common.sh@626 -- # local arg=wait 00:12:27.784 17:15:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:27.784 17:15:57 -- common/autotest_common.sh@630 -- # type -t wait 00:12:27.784 17:15:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:27.784 17:15:57 -- common/autotest_common.sh@641 -- # wait 73945 00:12:27.784 17:15:57 -- common/autotest_common.sh@641 -- # es=1 00:12:27.784 17:15:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:27.784 17:15:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:27.784 17:15:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.784 17:15:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.784 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 17:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.784 17:15:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.784 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 [2024-04-25 17:15:57.625021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.784 17:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.784 17:15:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.784 17:15:57 -- common/autotest_common.sh@10 -- # set +x 00:12:27.784 17:15:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@54 -- # perf_pid=73995 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@56 -- # delay=0 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:27.784 17:15:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:28.043 [2024-04-25 17:15:57.803867] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:28.317 17:15:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:28.317 17:15:58 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:28.317 17:15:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:28.884 17:15:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:28.884 17:15:58 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:28.884 17:15:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:29.453 17:15:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:29.453 17:15:59 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:29.453 17:15:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:29.711 17:15:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:29.711 17:15:59 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:29.711 17:15:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:30.278 17:16:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:30.278 17:16:00 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:30.278 17:16:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:30.844 17:16:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:30.844 17:16:00 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:30.844 17:16:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:31.103 Initializing NVMe Controllers 00:12:31.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:31.103 Controller IO queue size 128, less than required. 00:12:31.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:31.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:31.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:31.103 Initialization complete. Launching workers. 
00:12:31.103 ======================================================== 00:12:31.103 Latency(us) 00:12:31.103 Device Information : IOPS MiB/s Average min max 00:12:31.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003031.99 1000121.89 1042699.27 00:12:31.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004728.89 1000123.43 1012492.19 00:12:31.103 ======================================================== 00:12:31.103 Total : 256.00 0.12 1003880.44 1000121.89 1042699.27 00:12:31.103 00:12:31.361 17:16:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:31.361 17:16:01 -- target/delete_subsystem.sh@57 -- # kill -0 73995 00:12:31.361 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (73995) - No such process 00:12:31.361 17:16:01 -- target/delete_subsystem.sh@67 -- # wait 73995 00:12:31.361 17:16:01 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:31.361 17:16:01 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:31.361 17:16:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:31.361 17:16:01 -- nvmf/common.sh@117 -- # sync 00:12:31.361 17:16:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.361 17:16:01 -- nvmf/common.sh@120 -- # set +e 00:12:31.361 17:16:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.361 17:16:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.361 rmmod nvme_tcp 00:12:31.361 rmmod nvme_fabrics 00:12:31.361 rmmod nvme_keyring 00:12:31.361 17:16:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.361 17:16:01 -- nvmf/common.sh@124 -- # set -e 00:12:31.361 17:16:01 -- nvmf/common.sh@125 -- # return 0 00:12:31.361 17:16:01 -- nvmf/common.sh@478 -- # '[' -n 73913 ']' 00:12:31.361 17:16:01 -- nvmf/common.sh@479 -- # killprocess 73913 00:12:31.361 17:16:01 -- common/autotest_common.sh@936 -- # '[' -z 73913 ']' 00:12:31.361 17:16:01 -- common/autotest_common.sh@940 -- # kill -0 73913 00:12:31.361 17:16:01 -- common/autotest_common.sh@941 -- # uname 00:12:31.361 17:16:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.361 17:16:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73913 00:12:31.361 killing process with pid 73913 00:12:31.361 17:16:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.361 17:16:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.361 17:16:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73913' 00:12:31.361 17:16:01 -- common/autotest_common.sh@955 -- # kill 73913 00:12:31.361 17:16:01 -- common/autotest_common.sh@960 -- # wait 73913 00:12:31.619 17:16:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:31.619 17:16:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:31.619 17:16:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:31.619 17:16:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.619 17:16:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.619 17:16:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.619 17:16:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.619 17:16:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.619 17:16:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:31.619 ************************************ 00:12:31.619 END TEST nvmf_delete_subsystem 00:12:31.619 ************************************ 
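The burst of "completed with error (sct=0, sc=8)" lines above and the "kill: (73995) - No such process" message are the expected outcome of this test rather than a failure: delete_subsystem.sh starts spdk_nvme_perf against the subsystem, deletes the subsystem while I/O is still in flight, and then polls until the perf process has exited, treating the error completions as proof that outstanding commands were failed cleanly instead of hanging. A minimal sketch of that pattern follows; rpc.py and spdk_nvme_perf are invoked by bare name for brevity (the log uses their full paths under /home/vagrant/spdk_repo/spdk), and the retry limit is illustrative, not the script's exact value.

  # Start a perf workload in the background against the subsystem.
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Delete the subsystem while I/O is outstanding; queued commands complete
  # with an error status instead of blocking forever.
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Poll until the perf process has gone away (mirrors the kill -0 loop in the trace).
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1   # give up after roughly ten seconds
      sleep 0.5
  done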
00:12:31.619 00:12:31.619 real 0m8.556s 00:12:31.619 user 0m27.086s 00:12:31.619 sys 0m1.484s 00:12:31.620 17:16:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.620 17:16:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.620 17:16:01 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.620 17:16:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.620 17:16:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.620 17:16:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.878 ************************************ 00:12:31.879 START TEST nvmf_ns_masking 00:12:31.879 ************************************ 00:12:31.879 17:16:01 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.879 * Looking for test storage... 00:12:31.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.879 17:16:01 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.879 17:16:01 -- nvmf/common.sh@7 -- # uname -s 00:12:31.879 17:16:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.879 17:16:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.879 17:16:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.879 17:16:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.879 17:16:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.879 17:16:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.879 17:16:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.879 17:16:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.879 17:16:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.879 17:16:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.879 17:16:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:31.879 17:16:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:31.879 17:16:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.879 17:16:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.879 17:16:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.879 17:16:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.879 17:16:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.879 17:16:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.879 17:16:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.879 17:16:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.879 17:16:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.879 17:16:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.879 17:16:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.879 17:16:01 -- paths/export.sh@5 -- # export PATH 00:12:31.879 17:16:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.879 17:16:01 -- nvmf/common.sh@47 -- # : 0 00:12:31.879 17:16:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.879 17:16:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.879 17:16:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.879 17:16:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.879 17:16:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.879 17:16:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.879 17:16:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.879 17:16:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.879 17:16:01 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.879 17:16:01 -- target/ns_masking.sh@11 -- # loops=5 00:12:31.879 17:16:01 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:31.879 17:16:01 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:12:31.879 17:16:01 -- target/ns_masking.sh@15 -- # uuidgen 00:12:31.879 17:16:01 -- target/ns_masking.sh@15 -- # HOSTID=58612b72-abb3-46c7-8e45-ab0d563c718d 00:12:31.879 17:16:01 -- target/ns_masking.sh@44 -- # nvmftestinit 00:12:31.879 17:16:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:31.879 17:16:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.879 17:16:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:31.879 17:16:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:31.879 17:16:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:31.879 17:16:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.879 17:16:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.879 17:16:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:12:31.879 17:16:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:31.879 17:16:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:31.879 17:16:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:31.879 17:16:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:31.879 17:16:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:31.879 17:16:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:31.879 17:16:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.879 17:16:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.879 17:16:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.879 17:16:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:31.879 17:16:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.879 17:16:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.879 17:16:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.879 17:16:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.879 17:16:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.879 17:16:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.879 17:16:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.879 17:16:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.879 17:16:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:31.879 17:16:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:31.879 Cannot find device "nvmf_tgt_br" 00:12:31.879 17:16:01 -- nvmf/common.sh@155 -- # true 00:12:31.879 17:16:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.879 Cannot find device "nvmf_tgt_br2" 00:12:31.879 17:16:01 -- nvmf/common.sh@156 -- # true 00:12:31.879 17:16:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:31.879 17:16:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:31.879 Cannot find device "nvmf_tgt_br" 00:12:31.879 17:16:01 -- nvmf/common.sh@158 -- # true 00:12:31.879 17:16:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:31.879 Cannot find device "nvmf_tgt_br2" 00:12:31.879 17:16:01 -- nvmf/common.sh@159 -- # true 00:12:31.879 17:16:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:31.879 17:16:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:32.138 17:16:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.138 17:16:01 -- nvmf/common.sh@162 -- # true 00:12:32.138 17:16:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.138 17:16:01 -- nvmf/common.sh@163 -- # true 00:12:32.138 17:16:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.138 17:16:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.138 17:16:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.138 17:16:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.138 17:16:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.138 17:16:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:12:32.138 17:16:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.138 17:16:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.138 17:16:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.138 17:16:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:32.138 17:16:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:32.138 17:16:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:32.138 17:16:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:32.138 17:16:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.138 17:16:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.138 17:16:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.138 17:16:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:32.138 17:16:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:32.138 17:16:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.138 17:16:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.138 17:16:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.138 17:16:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.138 17:16:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.138 17:16:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:32.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:32.138 00:12:32.138 --- 10.0.0.2 ping statistics --- 00:12:32.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.138 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:32.138 17:16:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:32.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:12:32.138 00:12:32.138 --- 10.0.0.3 ping statistics --- 00:12:32.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.138 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:32.138 17:16:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:12:32.138 00:12:32.138 --- 10.0.0.1 ping statistics --- 00:12:32.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.138 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:12:32.138 17:16:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.138 17:16:02 -- nvmf/common.sh@422 -- # return 0 00:12:32.138 17:16:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:32.138 17:16:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.138 17:16:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:32.138 17:16:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:32.138 17:16:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.138 17:16:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:32.138 17:16:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:32.138 17:16:02 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:32.138 17:16:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:32.138 17:16:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:32.138 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.138 17:16:02 -- nvmf/common.sh@470 -- # nvmfpid=74236 00:12:32.138 17:16:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.138 17:16:02 -- nvmf/common.sh@471 -- # waitforlisten 74236 00:12:32.139 17:16:02 -- common/autotest_common.sh@817 -- # '[' -z 74236 ']' 00:12:32.139 17:16:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.139 17:16:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:32.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.139 17:16:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.139 17:16:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:32.139 17:16:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.397 [2024-04-25 17:16:02.140268] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:32.397 [2024-04-25 17:16:02.140350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.397 [2024-04-25 17:16:02.279536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.397 [2024-04-25 17:16:02.337558] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.397 [2024-04-25 17:16:02.337867] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.397 [2024-04-25 17:16:02.338029] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.397 [2024-04-25 17:16:02.338098] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.397 [2024-04-25 17:16:02.338205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
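The nvmf_veth_init sequence traced above builds the virtual network the target runs in: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth endpoints, the initiator side stays in the default namespace, and both sides are joined by a bridge so 10.0.0.1 (initiator) can reach 10.0.0.2 (target) before nvmf_tgt is started inside the namespace. Condensed from the ip/iptables commands in the log (the second target interface, 10.0.0.3 on nvmf_tgt_if2, is set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check before launching nvmf_tgt in the namespace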
00:12:32.397 [2024-04-25 17:16:02.338368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.397 [2024-04-25 17:16:02.338459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.397 [2024-04-25 17:16:02.339149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.397 [2024-04-25 17:16:02.339210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.331 17:16:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:33.331 17:16:03 -- common/autotest_common.sh@850 -- # return 0 00:12:33.331 17:16:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:33.331 17:16:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:33.331 17:16:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 17:16:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.331 17:16:03 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:33.588 [2024-04-25 17:16:03.379910] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.588 17:16:03 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:33.588 17:16:03 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:33.588 17:16:03 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:33.871 Malloc1 00:12:33.871 17:16:03 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:34.129 Malloc2 00:12:34.129 17:16:03 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.387 17:16:04 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:34.646 17:16:04 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.646 [2024-04-25 17:16:04.596470] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.646 17:16:04 -- target/ns_masking.sh@61 -- # connect 00:12:34.646 17:16:04 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 58612b72-abb3-46c7-8e45-ab0d563c718d -a 10.0.0.2 -s 4420 -i 4 00:12:34.905 17:16:04 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.905 17:16:04 -- common/autotest_common.sh@1184 -- # local i=0 00:12:34.905 17:16:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.905 17:16:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:34.905 17:16:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:36.807 17:16:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:36.807 17:16:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.807 17:16:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:36.807 17:16:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:36.807 17:16:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.807 17:16:06 -- common/autotest_common.sh@1194 -- # return 0 00:12:36.807 17:16:06 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:36.807 17:16:06 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:37.065 17:16:06 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:37.065 17:16:06 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:37.065 17:16:06 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:37.065 17:16:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.065 17:16:06 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:37.065 [ 0]:0x1 00:12:37.065 17:16:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.065 17:16:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:37.065 17:16:06 -- target/ns_masking.sh@40 -- # nguid=7c4ef22437444b2b95a163c6f042a81a 00:12:37.065 17:16:06 -- target/ns_masking.sh@41 -- # [[ 7c4ef22437444b2b95a163c6f042a81a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.065 17:16:06 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:37.323 17:16:07 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:37.323 17:16:07 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:37.323 17:16:07 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.323 [ 0]:0x1 00:12:37.323 17:16:07 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:37.323 17:16:07 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.323 17:16:07 -- target/ns_masking.sh@40 -- # nguid=7c4ef22437444b2b95a163c6f042a81a 00:12:37.323 17:16:07 -- target/ns_masking.sh@41 -- # [[ 7c4ef22437444b2b95a163c6f042a81a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.323 17:16:07 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:37.323 17:16:07 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.323 17:16:07 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:37.323 [ 1]:0x2 00:12:37.323 17:16:07 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:37.324 17:16:07 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.324 17:16:07 -- target/ns_masking.sh@40 -- # nguid=80a10fb1f98d4e1e9365ab72aa78dd52 00:12:37.324 17:16:07 -- target/ns_masking.sh@41 -- # [[ 80a10fb1f98d4e1e9365ab72aa78dd52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.324 17:16:07 -- target/ns_masking.sh@69 -- # disconnect 00:12:37.324 17:16:07 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.581 17:16:07 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.840 17:16:07 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:37.840 17:16:07 -- target/ns_masking.sh@77 -- # connect 1 00:12:37.840 17:16:07 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 58612b72-abb3-46c7-8e45-ab0d563c718d -a 10.0.0.2 -s 4420 -i 4 00:12:38.098 17:16:07 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:38.098 17:16:07 -- common/autotest_common.sh@1184 -- # local i=0 00:12:38.098 17:16:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.098 17:16:07 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:12:38.098 17:16:07 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:12:38.098 17:16:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:40.003 17:16:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:40.003 17:16:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:40.003 17:16:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.003 17:16:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:40.003 17:16:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.003 17:16:09 -- common/autotest_common.sh@1194 -- # return 0 00:12:40.003 17:16:09 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:40.003 17:16:09 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:40.262 17:16:09 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:40.262 17:16:09 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:40.262 17:16:09 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:40.262 17:16:09 -- common/autotest_common.sh@638 -- # local es=0 00:12:40.262 17:16:09 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:40.262 17:16:09 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:40.262 17:16:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.262 17:16:09 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:40.262 17:16:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.262 17:16:09 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:40.262 17:16:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.262 17:16:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:40.262 17:16:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.262 17:16:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.262 17:16:10 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:40.262 17:16:10 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.262 17:16:10 -- common/autotest_common.sh@641 -- # es=1 00:12:40.262 17:16:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:40.262 17:16:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:40.262 17:16:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:40.262 17:16:10 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:40.262 17:16:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.262 17:16:10 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:40.262 [ 0]:0x2 00:12:40.262 17:16:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.262 17:16:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.262 17:16:10 -- target/ns_masking.sh@40 -- # nguid=80a10fb1f98d4e1e9365ab72aa78dd52 00:12:40.262 17:16:10 -- target/ns_masking.sh@41 -- # [[ 80a10fb1f98d4e1e9365ab72aa78dd52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.262 17:16:10 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.521 17:16:10 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:40.521 17:16:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.521 17:16:10 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:40.521 [ 0]:0x1 00:12:40.521 
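The "[ 0]:0x1" / "[ 1]:0x2" lines and the nguid comparisons running through this part of the trace come from ns_masking.sh's ns_is_visible helper: after connecting with the host NQN and host ID shown above, a namespace counts as visible only if it appears in nvme list-ns and reports a non-zero NGUID via nvme id-ns. A rough reconstruction of that check, following the commands in the log (the script's exact quoting and error handling may differ):

  ns_is_visible() {
      local nsid=$1
      # The namespace must appear in the controller's active namespace list...
      nvme list-ns /dev/nvme0 | grep "$nsid"
      # ...and must identify with a real (non-zero) NGUID.
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  ns_is_visible 0x1       # expected to pass while NSID 1 is exposed to this host
  NOT ns_is_visible 0x1   # autotest's NOT helper: succeeds only if the check fails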
17:16:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.521 17:16:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.521 17:16:10 -- target/ns_masking.sh@40 -- # nguid=7c4ef22437444b2b95a163c6f042a81a 00:12:40.521 17:16:10 -- target/ns_masking.sh@41 -- # [[ 7c4ef22437444b2b95a163c6f042a81a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.521 17:16:10 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:40.521 17:16:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.521 17:16:10 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:40.521 [ 1]:0x2 00:12:40.521 17:16:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.521 17:16:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.521 17:16:10 -- target/ns_masking.sh@40 -- # nguid=80a10fb1f98d4e1e9365ab72aa78dd52 00:12:40.521 17:16:10 -- target/ns_masking.sh@41 -- # [[ 80a10fb1f98d4e1e9365ab72aa78dd52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.521 17:16:10 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.780 17:16:10 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:40.780 17:16:10 -- common/autotest_common.sh@638 -- # local es=0 00:12:40.780 17:16:10 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:40.780 17:16:10 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:40.780 17:16:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.780 17:16:10 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:40.780 17:16:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.780 17:16:10 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:40.780 17:16:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.780 17:16:10 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:40.780 17:16:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.780 17:16:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.780 17:16:10 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:40.780 17:16:10 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.780 17:16:10 -- common/autotest_common.sh@641 -- # es=1 00:12:40.780 17:16:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:40.780 17:16:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:40.780 17:16:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:40.780 17:16:10 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:40.780 17:16:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.780 17:16:10 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:40.780 [ 0]:0x2 00:12:40.780 17:16:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.780 17:16:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:41.038 17:16:10 -- target/ns_masking.sh@40 -- # nguid=80a10fb1f98d4e1e9365ab72aa78dd52 00:12:41.038 17:16:10 -- target/ns_masking.sh@41 -- # [[ 80a10fb1f98d4e1e9365ab72aa78dd52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.038 17:16:10 -- target/ns_masking.sh@91 -- # disconnect 00:12:41.038 17:16:10 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.038 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.038 17:16:10 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:41.296 17:16:11 -- target/ns_masking.sh@95 -- # connect 2 00:12:41.297 17:16:11 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 58612b72-abb3-46c7-8e45-ab0d563c718d -a 10.0.0.2 -s 4420 -i 4 00:12:41.297 17:16:11 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:41.297 17:16:11 -- common/autotest_common.sh@1184 -- # local i=0 00:12:41.297 17:16:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.297 17:16:11 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:41.297 17:16:11 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:41.297 17:16:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:43.259 17:16:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:43.259 17:16:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:43.526 17:16:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.526 17:16:13 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:43.526 17:16:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.526 17:16:13 -- common/autotest_common.sh@1194 -- # return 0 00:12:43.526 17:16:13 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.526 17:16:13 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:43.526 17:16:13 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:43.526 17:16:13 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:43.526 17:16:13 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:43.526 17:16:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:43.526 17:16:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:43.526 [ 0]:0x1 00:12:43.526 17:16:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:43.526 17:16:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.526 17:16:13 -- target/ns_masking.sh@40 -- # nguid=7c4ef22437444b2b95a163c6f042a81a 00:12:43.526 17:16:13 -- target/ns_masking.sh@41 -- # [[ 7c4ef22437444b2b95a163c6f042a81a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.526 17:16:13 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:43.526 17:16:13 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:43.526 17:16:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:43.526 [ 1]:0x2 00:12:43.526 17:16:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.526 17:16:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:43.526 17:16:13 -- target/ns_masking.sh@40 -- # nguid=80a10fb1f98d4e1e9365ab72aa78dd52 00:12:43.526 17:16:13 -- target/ns_masking.sh@41 -- # [[ 80a10fb1f98d4e1e9365ab72aa78dd52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.526 17:16:13 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:43.785 17:16:13 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:43.785 17:16:13 -- common/autotest_common.sh@638 -- # local es=0 00:12:43.785 17:16:13 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
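The masking being exercised in this stretch of the test is driven by three RPCs that appear verbatim in the trace: the namespace is added with --no-auto-visible so that no host sees it by default, then visibility is granted and revoked for a specific host NQN, with the nvme list-ns / id-ns checks above confirming each transition from the initiator side. In outline (rpc.py is invoked via its full repository path in the log, shortened here):

  # Add Malloc1 as NSID 1, hidden from all hosts until explicitly exposed.
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # Expose NSID 1 to host1 only.
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # Hide it again; a subsequent nvme list-ns from host1 should no longer show it.
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1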
00:12:43.785 17:16:13 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:43.785 17:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:43.785 17:16:13 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:43.785 17:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:43.785 17:16:13 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:43.785 17:16:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:43.785 17:16:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:43.785 17:16:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.785 17:16:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:43.785 17:16:13 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:43.785 17:16:13 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.785 17:16:13 -- common/autotest_common.sh@641 -- # es=1 00:12:43.785 17:16:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:43.785 17:16:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:43.785 17:16:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:43.785 17:16:13 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:43.785 17:16:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:43.785 17:16:13 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:43.785 [ 0]:0x2 00:12:43.785 17:16:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.785 17:16:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:44.043 17:16:13 -- target/ns_masking.sh@40 -- # nguid=80a10fb1f98d4e1e9365ab72aa78dd52 00:12:44.043 17:16:13 -- target/ns_masking.sh@41 -- # [[ 80a10fb1f98d4e1e9365ab72aa78dd52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.043 17:16:13 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:44.043 17:16:13 -- common/autotest_common.sh@638 -- # local es=0 00:12:44.043 17:16:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:44.043 17:16:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.043 17:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.043 17:16:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.043 17:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.043 17:16:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.043 17:16:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.043 17:16:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:44.043 17:16:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:44.043 17:16:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:44.301 [2024-04-25 17:16:14.054183] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:44.301 2024/04/25 17:16:14 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:12:44.301 request: 00:12:44.301 { 00:12:44.301 "method": "nvmf_ns_remove_host", 00:12:44.301 "params": { 00:12:44.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.301 "nsid": 2, 00:12:44.301 "host": "nqn.2016-06.io.spdk:host1" 00:12:44.301 } 00:12:44.301 } 00:12:44.301 Got JSON-RPC error response 00:12:44.301 GoRPCClient: error on JSON-RPC call 00:12:44.301 17:16:14 -- common/autotest_common.sh@641 -- # es=1 00:12:44.301 17:16:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:44.301 17:16:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:44.301 17:16:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:44.301 17:16:14 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:44.301 17:16:14 -- common/autotest_common.sh@638 -- # local es=0 00:12:44.301 17:16:14 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:44.301 17:16:14 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:44.301 17:16:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.301 17:16:14 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:44.301 17:16:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.301 17:16:14 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:44.301 17:16:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:44.301 17:16:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:44.301 17:16:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:44.301 17:16:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.301 17:16:14 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:44.301 17:16:14 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.301 17:16:14 -- common/autotest_common.sh@641 -- # es=1 00:12:44.301 17:16:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:44.301 17:16:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:44.301 17:16:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:44.301 17:16:14 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:44.301 17:16:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:44.301 17:16:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:44.301 [ 0]:0x2 00:12:44.301 17:16:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:44.301 17:16:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:44.301 17:16:14 -- target/ns_masking.sh@40 -- # nguid=80a10fb1f98d4e1e9365ab72aa78dd52 00:12:44.301 17:16:14 -- target/ns_masking.sh@41 -- # [[ 80a10fb1f98d4e1e9365ab72aa78dd52 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.301 17:16:14 -- target/ns_masking.sh@108 -- # disconnect 00:12:44.301 17:16:14 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.301 17:16:14 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.558 17:16:14 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:44.558 17:16:14 -- target/ns_masking.sh@114 -- # nvmftestfini 
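The "Invalid parameters" response above is deliberate: the script wraps the call in NOT and asks to remove host access for NSID 2, which was added earlier without --no-auto-visible. Since that namespace is auto-visible to every host, the target apparently refuses to edit a per-host visibility list for it (the nvmf_rpc_ns_visible_paused error), and the Go JSON-RPC client surfaces Code=-32602, which is exactly what the negative test expects. The failing call, as issued by the script:

  # Expected to fail: NSID 2 is an auto-visible namespace, so per-host
  # masking cannot be changed for it.
  NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1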
00:12:44.558 17:16:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:44.558 17:16:14 -- nvmf/common.sh@117 -- # sync 00:12:44.558 17:16:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.558 17:16:14 -- nvmf/common.sh@120 -- # set +e 00:12:44.558 17:16:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.558 17:16:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.558 rmmod nvme_tcp 00:12:44.816 rmmod nvme_fabrics 00:12:44.816 rmmod nvme_keyring 00:12:44.816 17:16:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.816 17:16:14 -- nvmf/common.sh@124 -- # set -e 00:12:44.816 17:16:14 -- nvmf/common.sh@125 -- # return 0 00:12:44.816 17:16:14 -- nvmf/common.sh@478 -- # '[' -n 74236 ']' 00:12:44.816 17:16:14 -- nvmf/common.sh@479 -- # killprocess 74236 00:12:44.816 17:16:14 -- common/autotest_common.sh@936 -- # '[' -z 74236 ']' 00:12:44.816 17:16:14 -- common/autotest_common.sh@940 -- # kill -0 74236 00:12:44.816 17:16:14 -- common/autotest_common.sh@941 -- # uname 00:12:44.816 17:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:44.816 17:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74236 00:12:44.816 killing process with pid 74236 00:12:44.816 17:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:44.816 17:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:44.816 17:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74236' 00:12:44.816 17:16:14 -- common/autotest_common.sh@955 -- # kill 74236 00:12:44.816 17:16:14 -- common/autotest_common.sh@960 -- # wait 74236 00:12:45.075 17:16:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:45.075 17:16:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:45.075 17:16:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:45.075 17:16:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.075 17:16:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:45.075 17:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.075 17:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.075 17:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.075 17:16:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:45.075 00:12:45.075 real 0m13.216s 00:12:45.075 user 0m52.949s 00:12:45.075 sys 0m2.243s 00:12:45.075 17:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.075 17:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:45.075 ************************************ 00:12:45.075 END TEST nvmf_ns_masking 00:12:45.075 ************************************ 00:12:45.075 17:16:14 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:12:45.075 17:16:14 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:45.075 17:16:14 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:45.075 17:16:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:45.075 17:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.075 17:16:14 -- common/autotest_common.sh@10 -- # set +x 00:12:45.075 ************************************ 00:12:45.075 START TEST nvmf_vfio_user 00:12:45.075 ************************************ 00:12:45.075 17:16:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:45.075 * Looking for test storage... 
00:12:45.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:45.333 17:16:15 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:45.333 17:16:15 -- nvmf/common.sh@7 -- # uname -s 00:12:45.333 17:16:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.333 17:16:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.333 17:16:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.333 17:16:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.333 17:16:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.333 17:16:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.333 17:16:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.333 17:16:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.333 17:16:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.333 17:16:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.333 17:16:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:45.333 17:16:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:12:45.333 17:16:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.333 17:16:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.333 17:16:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:45.333 17:16:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.333 17:16:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.333 17:16:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.333 17:16:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.333 17:16:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.334 17:16:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.334 17:16:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.334 17:16:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.334 17:16:15 -- paths/export.sh@5 -- # export PATH 00:12:45.334 17:16:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.334 17:16:15 -- nvmf/common.sh@47 -- # : 0 00:12:45.334 17:16:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.334 17:16:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.334 17:16:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.334 17:16:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.334 17:16:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.334 17:16:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.334 17:16:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.334 17:16:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=74697 00:12:45.334 Process pid: 74697 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 74697' 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:45.334 17:16:15 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 74697 00:12:45.334 17:16:15 -- common/autotest_common.sh@817 -- # '[' -z 74697 ']' 00:12:45.334 17:16:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.334 17:16:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:45.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
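[Editorial sketch, not part of the captured output] The target bring-up traced above amounts to starting nvmf_tgt and blocking until its JSON-RPC socket answers. A minimal equivalent is sketched below, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as a simple readiness probe in place of the waitforlisten helper from autotest_common.sh.

# Launch the target on cores 0-3 with full tracepoints, then wait for RPC to come up.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
    sleep 0.5
done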
00:12:45.334 17:16:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.334 17:16:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:45.334 17:16:15 -- common/autotest_common.sh@10 -- # set +x 00:12:45.334 [2024-04-25 17:16:15.136889] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:45.334 [2024-04-25 17:16:15.136977] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.334 [2024-04-25 17:16:15.272434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.592 [2024-04-25 17:16:15.334243] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.592 [2024-04-25 17:16:15.334471] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.592 [2024-04-25 17:16:15.334561] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.592 [2024-04-25 17:16:15.334642] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.592 [2024-04-25 17:16:15.334728] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.592 [2024-04-25 17:16:15.334887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.592 [2024-04-25 17:16:15.334993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.592 [2024-04-25 17:16:15.335427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.592 [2024-04-25 17:16:15.335442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.159 17:16:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:46.159 17:16:16 -- common/autotest_common.sh@850 -- # return 0 00:12:46.159 17:16:16 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:47.533 17:16:17 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:47.533 17:16:17 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:47.533 17:16:17 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:47.533 17:16:17 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:47.533 17:16:17 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:47.533 17:16:17 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:47.791 Malloc1 00:12:47.791 17:16:17 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:48.050 17:16:17 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:48.308 17:16:18 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:48.567 17:16:18 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:48.567 17:16:18 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:48.567 17:16:18 -- target/nvmf_vfio_user.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:48.826 Malloc2 00:12:48.826 17:16:18 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:49.085 17:16:18 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:49.343 17:16:19 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:49.603 17:16:19 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:49.603 17:16:19 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:49.603 17:16:19 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:49.603 17:16:19 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:49.603 17:16:19 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:49.603 17:16:19 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:49.603 [2024-04-25 17:16:19.408774] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:12:49.603 [2024-04-25 17:16:19.408816] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74833 ] 00:12:49.603 [2024-04-25 17:16:19.545887] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:49.603 [2024-04-25 17:16:19.555077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:49.603 [2024-04-25 17:16:19.555111] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd4e5dba000 00:12:49.603 [2024-04-25 17:16:19.556072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.557061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.558062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.559062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.560063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.561068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.562067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.563087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, 
Offset 0x0, Flags 0x0, Cap offset 0 00:12:49.603 [2024-04-25 17:16:19.564080] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:49.603 [2024-04-25 17:16:19.564104] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd4e5daf000 00:12:49.603 [2024-04-25 17:16:19.565383] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:49.863 [2024-04-25 17:16:19.582807] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:49.864 [2024-04-25 17:16:19.582843] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:49.864 [2024-04-25 17:16:19.585143] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:49.864 [2024-04-25 17:16:19.585198] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:49.864 [2024-04-25 17:16:19.585275] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:49.864 [2024-04-25 17:16:19.585297] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:49.864 [2024-04-25 17:16:19.585303] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:49.864 [2024-04-25 17:16:19.586134] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:49.864 [2024-04-25 17:16:19.586156] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:49.864 [2024-04-25 17:16:19.586167] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:49.864 [2024-04-25 17:16:19.587133] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:49.864 [2024-04-25 17:16:19.587156] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:49.864 [2024-04-25 17:16:19.587167] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:49.864 [2024-04-25 17:16:19.588166] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:49.864 [2024-04-25 17:16:19.588194] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:49.864 [2024-04-25 17:16:19.589144] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:49.864 [2024-04-25 17:16:19.589166] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 
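[Editorial sketch, not part of the captured output] For reference, the per-device setup performed earlier in this run can be condensed into the following sequence; the commands are the ones shown in the xtrace (rpc.py talking to the default /var/tmp/spdk.sock), followed by the identify probe that produced the controller dump below.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
vu_dir=/var/run/vfio-user/domain/vfio-user1/1
mkdir -p "$vu_dir"
$rpc nvmf_create_transport -t VFIOUSER
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a "$vu_dir" -s 0
# Probe the endpoint from an initiator; this -r connection string is used by all
# the example tools later in this run.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r "trtype:VFIOUSER traddr:$vu_dir subnqn:nqn.2019-07.io.spdk:cnode1" -g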
00:12:49.864 [2024-04-25 17:16:19.589173] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:49.864 [2024-04-25 17:16:19.589182] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:49.864 [2024-04-25 17:16:19.589288] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:49.864 [2024-04-25 17:16:19.589293] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:49.864 [2024-04-25 17:16:19.589299] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:49.864 [2024-04-25 17:16:19.593760] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:49.864 [2024-04-25 17:16:19.594171] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:49.864 [2024-04-25 17:16:19.595175] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:49.864 [2024-04-25 17:16:19.596167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:49.864 [2024-04-25 17:16:19.596261] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:49.864 [2024-04-25 17:16:19.597194] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:49.864 [2024-04-25 17:16:19.597215] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:49.864 [2024-04-25 17:16:19.597222] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597243] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:49.864 [2024-04-25 17:16:19.597259] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597279] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.864 [2024-04-25 17:16:19.597285] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.864 [2024-04-25 17:16:19.597299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.864 [2024-04-25 17:16:19.597382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:49.864 [2024-04-25 17:16:19.597393] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:49.864 
[2024-04-25 17:16:19.597398] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:49.864 [2024-04-25 17:16:19.597403] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:49.864 [2024-04-25 17:16:19.597407] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:49.864 [2024-04-25 17:16:19.597412] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:49.864 [2024-04-25 17:16:19.597417] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:49.864 [2024-04-25 17:16:19.597422] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597430] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:49.864 [2024-04-25 17:16:19.597453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:49.864 [2024-04-25 17:16:19.597466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.864 [2024-04-25 17:16:19.597475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.864 [2024-04-25 17:16:19.597483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.864 [2024-04-25 17:16:19.597490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.864 [2024-04-25 17:16:19.597495] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597505] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:49.864 [2024-04-25 17:16:19.597525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:49.864 [2024-04-25 17:16:19.597531] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:49.864 [2024-04-25 17:16:19.597537] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597547] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 
00:12:49.864 [2024-04-25 17:16:19.597553] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:49.864 [2024-04-25 17:16:19.597572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:49.864 [2024-04-25 17:16:19.597619] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597629] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:49.864 [2024-04-25 17:16:19.597637] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:49.864 [2024-04-25 17:16:19.597642] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:49.864 [2024-04-25 17:16:19.597648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:49.864 [2024-04-25 17:16:19.597661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.597671] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:49.865 [2024-04-25 17:16:19.597682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597691] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597698] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.865 [2024-04-25 17:16:19.597703] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.865 [2024-04-25 17:16:19.597709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.597745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.597775] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597786] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597794] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.865 [2024-04-25 17:16:19.597798] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.865 [2024-04-25 17:16:19.597805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.597821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.597830] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597839] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597848] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597854] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597860] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597865] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:49.865 [2024-04-25 17:16:19.597870] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:49.865 [2024-04-25 17:16:19.597875] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:49.865 [2024-04-25 17:16:19.597895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.597907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.597921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.597932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.597960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.597975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.597988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.597999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.598013] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:49.865 [2024-04-25 17:16:19.598018] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:49.865 [2024-04-25 17:16:19.598022] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:49.865 [2024-04-25 17:16:19.598026] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 
0x2000002f7000 00:12:49.865 [2024-04-25 17:16:19.598032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:49.865 [2024-04-25 17:16:19.598040] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:49.865 [2024-04-25 17:16:19.598045] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:49.865 [2024-04-25 17:16:19.598051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.598058] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:49.865 [2024-04-25 17:16:19.598063] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.865 [2024-04-25 17:16:19.598069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.598091] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:49.865 [2024-04-25 17:16:19.598096] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:49.865 [2024-04-25 17:16:19.598102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:49.865 [2024-04-25 17:16:19.598109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.598124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.598135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:49.865 [2024-04-25 17:16:19.598143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:49.865 ===================================================== 00:12:49.865 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:49.865 ===================================================== 00:12:49.865 Controller Capabilities/Features 00:12:49.865 ================================ 00:12:49.865 Vendor ID: 4e58 00:12:49.865 Subsystem Vendor ID: 4e58 00:12:49.865 Serial Number: SPDK1 00:12:49.865 Model Number: SPDK bdev Controller 00:12:49.865 Firmware Version: 24.05 00:12:49.865 Recommended Arb Burst: 6 00:12:49.865 IEEE OUI Identifier: 8d 6b 50 00:12:49.865 Multi-path I/O 00:12:49.865 May have multiple subsystem ports: Yes 00:12:49.865 May have multiple controllers: Yes 00:12:49.865 Associated with SR-IOV VF: No 00:12:49.865 Max Data Transfer Size: 131072 00:12:49.865 Max Number of Namespaces: 32 00:12:49.865 Max Number of I/O Queues: 127 00:12:49.865 NVMe Specification Version (VS): 1.3 00:12:49.865 NVMe Specification Version (Identify): 1.3 00:12:49.865 Maximum Queue Entries: 256 00:12:49.865 Contiguous Queues Required: Yes 00:12:49.865 Arbitration Mechanisms Supported 00:12:49.865 Weighted Round Robin: Not Supported 00:12:49.865 Vendor Specific: Not Supported 00:12:49.865 
Reset Timeout: 15000 ms 00:12:49.865 Doorbell Stride: 4 bytes 00:12:49.865 NVM Subsystem Reset: Not Supported 00:12:49.865 Command Sets Supported 00:12:49.865 NVM Command Set: Supported 00:12:49.865 Boot Partition: Not Supported 00:12:49.865 Memory Page Size Minimum: 4096 bytes 00:12:49.865 Memory Page Size Maximum: 4096 bytes 00:12:49.865 Persistent Memory Region: Not Supported 00:12:49.865 Optional Asynchronous Events Supported 00:12:49.865 Namespace Attribute Notices: Supported 00:12:49.865 Firmware Activation Notices: Not Supported 00:12:49.865 ANA Change Notices: Not Supported 00:12:49.865 PLE Aggregate Log Change Notices: Not Supported 00:12:49.865 LBA Status Info Alert Notices: Not Supported 00:12:49.865 EGE Aggregate Log Change Notices: Not Supported 00:12:49.865 Normal NVM Subsystem Shutdown event: Not Supported 00:12:49.865 Zone Descriptor Change Notices: Not Supported 00:12:49.865 Discovery Log Change Notices: Not Supported 00:12:49.865 Controller Attributes 00:12:49.865 128-bit Host Identifier: Supported 00:12:49.865 Non-Operational Permissive Mode: Not Supported 00:12:49.865 NVM Sets: Not Supported 00:12:49.866 Read Recovery Levels: Not Supported 00:12:49.866 Endurance Groups: Not Supported 00:12:49.866 Predictable Latency Mode: Not Supported 00:12:49.866 Traffic Based Keep ALive: Not Supported 00:12:49.866 Namespace Granularity: Not Supported 00:12:49.866 SQ Associations: Not Supported 00:12:49.866 UUID List: Not Supported 00:12:49.866 Multi-Domain Subsystem: Not Supported 00:12:49.866 Fixed Capacity Management: Not Supported 00:12:49.866 Variable Capacity Management: Not Supported 00:12:49.866 Delete Endurance Group: Not Supported 00:12:49.866 Delete NVM Set: Not Supported 00:12:49.866 Extended LBA Formats Supported: Not Supported 00:12:49.866 Flexible Data Placement Supported: Not Supported 00:12:49.866 00:12:49.866 Controller Memory Buffer Support 00:12:49.866 ================================ 00:12:49.866 Supported: No 00:12:49.866 00:12:49.866 Persistent Memory Region Support 00:12:49.866 ================================ 00:12:49.866 Supported: No 00:12:49.866 00:12:49.866 Admin Command Set Attributes 00:12:49.866 ============================ 00:12:49.866 Security Send/Receive: Not Supported 00:12:49.866 Format NVM: Not Supported 00:12:49.866 Firmware Activate/Download: Not Supported 00:12:49.866 Namespace Management: Not Supported 00:12:49.866 Device Self-Test: Not Supported 00:12:49.866 Directives: Not Supported 00:12:49.866 NVMe-MI: Not Supported 00:12:49.866 Virtualization Management: Not Supported 00:12:49.866 Doorbell Buffer Config: Not Supported 00:12:49.866 Get LBA Status Capability: Not Supported 00:12:49.866 Command & Feature Lockdown Capability: Not Supported 00:12:49.866 Abort Command Limit: 4 00:12:49.866 Async Event Request Limit: 4 00:12:49.866 Number of Firmware Slots: N/A 00:12:49.866 Firmware Slot 1 Read-Only: N/A 00:12:49.866 Firmware Activation Without Reset: N/A 00:12:49.866 Multiple Update Detection Support: N/A 00:12:49.866 Firmware Update Granularity: No Information Provided 00:12:49.866 Per-Namespace SMART Log: No 00:12:49.866 Asymmetric Namespace Access Log Page: Not Supported 00:12:49.866 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:49.866 Command Effects Log Page: Supported 00:12:49.866 Get Log Page Extended Data: Supported 00:12:49.866 Telemetry Log Pages: Not Supported 00:12:49.866 Persistent Event Log Pages: Not Supported 00:12:49.866 Supported Log Pages Log Page: May Support 00:12:49.866 Commands Supported & Effects Log Page: Not 
Supported 00:12:49.866 Feature Identifiers & Effects Log Page:May Support 00:12:49.866 NVMe-MI Commands & Effects Log Page: May Support 00:12:49.866 Data Area 4 for Telemetry Log: Not Supported 00:12:49.866 Error Log Page Entries Supported: 128 00:12:49.866 Keep Alive: Supported 00:12:49.866 Keep Alive Granularity: 10000 ms 00:12:49.866 00:12:49.866 NVM Command Set Attributes 00:12:49.866 ========================== 00:12:49.866 Submission Queue Entry Size 00:12:49.866 Max: 64 00:12:49.866 Min: 64 00:12:49.866 Completion Queue Entry Size 00:12:49.866 Max: 16 00:12:49.866 Min: 16 00:12:49.866 Number of Namespaces: 32 00:12:49.866 Compare Command: Supported 00:12:49.866 Write Uncorrectable Command: Not Supported 00:12:49.866 Dataset Management Command: Supported 00:12:49.866 Write Zeroes Command: Supported 00:12:49.866 Set Features Save Field: Not Supported 00:12:49.866 Reservations: Not Supported 00:12:49.866 Timestamp: Not Supported 00:12:49.866 Copy: Supported 00:12:49.866 Volatile Write Cache: Present 00:12:49.866 Atomic Write Unit (Normal): 1 00:12:49.866 Atomic Write Unit (PFail): 1 00:12:49.866 Atomic Compare & Write Unit: 1 00:12:49.866 Fused Compare & Write: Supported 00:12:49.866 Scatter-Gather List 00:12:49.866 SGL Command Set: Supported (Dword aligned) 00:12:49.866 SGL Keyed: Not Supported 00:12:49.866 SGL Bit Bucket Descriptor: Not Supported 00:12:49.866 SGL Metadata Pointer: Not Supported 00:12:49.866 Oversized SGL: Not Supported 00:12:49.866 SGL Metadata Address: Not Supported 00:12:49.866 SGL Offset: Not Supported 00:12:49.866 Transport SGL Data Block: Not Supported 00:12:49.866 Replay Protected Memory Block: Not Supported 00:12:49.866 00:12:49.866 Firmware Slot Information 00:12:49.866 ========================= 00:12:49.866 Active slot: 1 00:12:49.866 Slot 1 Firmware Revision: 24.05 00:12:49.866 00:12:49.866 00:12:49.866 Commands Supported and Effects 00:12:49.866 ============================== 00:12:49.866 Admin Commands 00:12:49.866 -------------- 00:12:49.866 Get Log Page (02h): Supported 00:12:49.866 Identify (06h): Supported 00:12:49.866 Abort (08h): Supported 00:12:49.866 Set Features (09h): Supported 00:12:49.866 Get Features (0Ah): Supported 00:12:49.866 Asynchronous Event Request (0Ch): Supported 00:12:49.866 Keep Alive (18h): Supported 00:12:49.866 I/O Commands 00:12:49.866 ------------ 00:12:49.866 Flush (00h): Supported LBA-Change 00:12:49.866 Write (01h): Supported LBA-Change 00:12:49.866 Read (02h): Supported 00:12:49.866 Compare (05h): Supported 00:12:49.866 Write Zeroes (08h): Supported LBA-Change 00:12:49.866 Dataset Management (09h): Supported LBA-Change 00:12:49.866 Copy (19h): Supported LBA-Change 00:12:49.866 Unknown (79h): Supported LBA-Change 00:12:49.866 Unknown (7Ah): Supported 00:12:49.866 00:12:49.866 Error Log 00:12:49.866 ========= 00:12:49.866 00:12:49.866 Arbitration 00:12:49.866 =========== 00:12:49.866 Arbitration Burst: 1 00:12:49.866 00:12:49.866 Power Management 00:12:49.866 ================ 00:12:49.866 Number of Power States: 1 00:12:49.866 Current Power State: Power State #0 00:12:49.866 Power State #0: 00:12:49.866 Max Power: 0.00 W 00:12:49.866 Non-Operational State: Operational 00:12:49.866 Entry Latency: Not Reported 00:12:49.866 Exit Latency: Not Reported 00:12:49.866 Relative Read Throughput: 0 00:12:49.866 Relative Read Latency: 0 00:12:49.866 Relative Write Throughput: 0 00:12:49.866 Relative Write Latency: 0 00:12:49.866 Idle Power: Not Reported 00:12:49.866 Active Power: Not Reported 00:12:49.866 Non-Operational Permissive 
Mode: Not Supported 00:12:49.866 00:12:49.866 Health Information 00:12:49.866 ================== 00:12:49.866 Critical Warnings: 00:12:49.866 Available Spare Space: OK 00:12:49.866 Temperature: OK 00:12:49.866 Device Reliability: OK 00:12:49.867 Read Only: No 00:12:49.867 Volatile Memory Backup: OK 00:12:49.867 Current Temperature: 0 Kelvin (-2[2024-04-25 17:16:19.598274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:49.867 [2024-04-25 17:16:19.598286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:49.867 [2024-04-25 17:16:19.598332] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:49.867 [2024-04-25 17:16:19.598349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.867 [2024-04-25 17:16:19.598356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.867 [2024-04-25 17:16:19.598363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.867 [2024-04-25 17:16:19.598369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:49.867 [2024-04-25 17:16:19.599185] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:49.867 [2024-04-25 17:16:19.599213] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:49.867 [2024-04-25 17:16:19.600186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:49.867 [2024-04-25 17:16:19.600269] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:49.867 [2024-04-25 17:16:19.600280] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:49.867 [2024-04-25 17:16:19.601189] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:49.867 [2024-04-25 17:16:19.601216] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:49.867 [2024-04-25 17:16:19.601273] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:49.867 [2024-04-25 17:16:19.603241] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:49.867 73 Celsius) 00:12:49.867 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:49.867 Available Spare: 0% 00:12:49.867 Available Spare Threshold: 0% 00:12:49.867 Life Percentage Used: 0% 00:12:49.867 Data Units Read: 0 00:12:49.867 Data Units Written: 0 00:12:49.867 Host Read Commands: 0 00:12:49.867 Host Write Commands: 0 00:12:49.867 Controller Busy Time: 0 minutes 00:12:49.867 Power Cycles: 0 00:12:49.867 Power On Hours: 0 hours 00:12:49.867 Unsafe Shutdowns: 0 00:12:49.867 Unrecoverable Media Errors: 0 00:12:49.867 Lifetime 
Error Log Entries: 0 00:12:49.867 Warning Temperature Time: 0 minutes 00:12:49.867 Critical Temperature Time: 0 minutes 00:12:49.867 00:12:49.867 Number of Queues 00:12:49.867 ================ 00:12:49.867 Number of I/O Submission Queues: 127 00:12:49.867 Number of I/O Completion Queues: 127 00:12:49.867 00:12:49.867 Active Namespaces 00:12:49.867 ================= 00:12:49.867 Namespace ID:1 00:12:49.867 Error Recovery Timeout: Unlimited 00:12:49.867 Command Set Identifier: NVM (00h) 00:12:49.867 Deallocate: Supported 00:12:49.867 Deallocated/Unwritten Error: Not Supported 00:12:49.867 Deallocated Read Value: Unknown 00:12:49.867 Deallocate in Write Zeroes: Not Supported 00:12:49.867 Deallocated Guard Field: 0xFFFF 00:12:49.867 Flush: Supported 00:12:49.867 Reservation: Supported 00:12:49.867 Namespace Sharing Capabilities: Multiple Controllers 00:12:49.867 Size (in LBAs): 131072 (0GiB) 00:12:49.867 Capacity (in LBAs): 131072 (0GiB) 00:12:49.867 Utilization (in LBAs): 131072 (0GiB) 00:12:49.867 NGUID: EBC17231AB2C4D3D86F978C3D2A1D4A4 00:12:49.867 UUID: ebc17231-ab2c-4d3d-86f9-78c3d2a1d4a4 00:12:49.867 Thin Provisioning: Not Supported 00:12:49.867 Per-NS Atomic Units: Yes 00:12:49.867 Atomic Boundary Size (Normal): 0 00:12:49.867 Atomic Boundary Size (PFail): 0 00:12:49.867 Atomic Boundary Offset: 0 00:12:49.867 Maximum Single Source Range Length: 65535 00:12:49.867 Maximum Copy Length: 65535 00:12:49.867 Maximum Source Range Count: 1 00:12:49.867 NGUID/EUI64 Never Reused: No 00:12:49.867 Namespace Write Protected: No 00:12:49.867 Number of LBA Formats: 1 00:12:49.867 Current LBA Format: LBA Format #00 00:12:49.867 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:49.867 00:12:49.867 17:16:19 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:50.125 [2024-04-25 17:16:19.900257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:55.417 [2024-04-25 17:16:24.907479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:55.417 Initializing NVMe Controllers 00:12:55.417 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:55.417 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:55.417 Initialization complete. Launching workers. 
00:12:55.417 ======================================================== 00:12:55.417 Latency(us) 00:12:55.417 Device Information : IOPS MiB/s Average min max 00:12:55.417 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32062.67 125.24 3991.58 1072.16 11377.00 00:12:55.417 ======================================================== 00:12:55.417 Total : 32062.67 125.24 3991.58 1072.16 11377.00 00:12:55.417 00:12:55.417 17:16:24 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:55.417 [2024-04-25 17:16:25.205070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:00.691 [2024-04-25 17:16:30.235084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:00.691 Initializing NVMe Controllers 00:13:00.691 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:00.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:00.691 Initialization complete. Launching workers. 00:13:00.691 ======================================================== 00:13:00.691 Latency(us) 00:13:00.691 Device Information : IOPS MiB/s Average min max 00:13:00.691 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16103.80 62.91 7957.40 5956.37 14582.25 00:13:00.691 ======================================================== 00:13:00.691 Total : 16103.80 62.91 7957.40 5956.37 14582.25 00:13:00.691 00:13:00.691 17:16:30 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:00.691 [2024-04-25 17:16:30.510666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.010 [2024-04-25 17:16:35.577070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.010 Initializing NVMe Controllers 00:13:06.010 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:06.010 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:06.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:06.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:06.010 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:06.010 Initialization complete. Launching workers. 
00:13:06.010 Starting thread on core 2 00:13:06.010 Starting thread on core 3 00:13:06.010 Starting thread on core 1 00:13:06.010 17:16:35 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:06.010 [2024-04-25 17:16:35.893932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:09.296 [2024-04-25 17:16:38.942934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:09.296 Initializing NVMe Controllers 00:13:09.296 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:09.296 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:09.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:09.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:09.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:09.296 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:09.296 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:09.296 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:09.296 Initialization complete. Launching workers. 00:13:09.296 Starting thread on core 1 with urgent priority queue 00:13:09.296 Starting thread on core 2 with urgent priority queue 00:13:09.296 Starting thread on core 3 with urgent priority queue 00:13:09.296 Starting thread on core 0 with urgent priority queue 00:13:09.296 SPDK bdev Controller (SPDK1 ) core 0: 8292.67 IO/s 12.06 secs/100000 ios 00:13:09.296 SPDK bdev Controller (SPDK1 ) core 1: 7410.33 IO/s 13.49 secs/100000 ios 00:13:09.296 SPDK bdev Controller (SPDK1 ) core 2: 8522.33 IO/s 11.73 secs/100000 ios 00:13:09.296 SPDK bdev Controller (SPDK1 ) core 3: 7467.33 IO/s 13.39 secs/100000 ios 00:13:09.296 ======================================================== 00:13:09.296 00:13:09.296 17:16:38 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:09.296 [2024-04-25 17:16:39.245766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:09.554 [2024-04-25 17:16:39.279220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:09.554 Initializing NVMe Controllers 00:13:09.554 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:09.554 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:09.554 Namespace ID: 1 size: 0GB 00:13:09.554 Initialization complete. 00:13:09.554 INFO: using host memory buffer for IO 00:13:09.554 Hello world! 00:13:09.554 17:16:39 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:09.812 [2024-04-25 17:16:39.585848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:10.749 Initializing NVMe Controllers 00:13:10.749 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:10.749 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:10.749 Initialization complete. Launching workers. 
00:13:10.749 submit (in ns) avg, min, max = 6944.2, 3119.1, 7012813.6 00:13:10.749 complete (in ns) avg, min, max = 28857.8, 2012.7, 7042380.0 00:13:10.749 00:13:10.749 Submit histogram 00:13:10.749 ================ 00:13:10.749 Range in us Cumulative Count 00:13:10.749 3.113 - 3.127: 0.0297% ( 4) 00:13:10.749 3.127 - 3.142: 0.0446% ( 2) 00:13:10.749 3.142 - 3.156: 0.1636% ( 16) 00:13:10.749 3.156 - 3.171: 0.7287% ( 76) 00:13:10.749 3.171 - 3.185: 1.6211% ( 120) 00:13:10.749 3.185 - 3.200: 4.5583% ( 395) 00:13:10.749 3.200 - 3.215: 9.3992% ( 651) 00:13:10.749 3.215 - 3.229: 13.4444% ( 544) 00:13:10.749 3.229 - 3.244: 17.3929% ( 531) 00:13:10.749 3.244 - 3.258: 22.9774% ( 751) 00:13:10.749 3.258 - 3.273: 27.2457% ( 574) 00:13:10.749 3.273 - 3.287: 30.7852% ( 476) 00:13:10.749 3.287 - 3.302: 34.4363% ( 491) 00:13:10.749 3.302 - 3.316: 37.8049% ( 453) 00:13:10.749 3.316 - 3.331: 41.0173% ( 432) 00:13:10.749 3.331 - 3.345: 43.2332% ( 298) 00:13:10.749 3.345 - 3.360: 45.8358% ( 350) 00:13:10.749 3.360 - 3.375: 47.5089% ( 225) 00:13:10.749 3.375 - 3.389: 49.2713% ( 237) 00:13:10.749 3.389 - 3.404: 51.1749% ( 256) 00:13:10.749 3.404 - 3.418: 52.3498% ( 158) 00:13:10.749 3.418 - 3.433: 53.6065% ( 169) 00:13:10.749 3.433 - 3.447: 55.1086% ( 202) 00:13:10.749 3.447 - 3.462: 56.3727% ( 170) 00:13:10.749 3.462 - 3.476: 57.5253% ( 155) 00:13:10.749 3.476 - 3.491: 59.5256% ( 269) 00:13:10.749 3.491 - 3.505: 62.0241% ( 336) 00:13:10.749 3.505 - 3.520: 63.8459% ( 245) 00:13:10.749 3.520 - 3.535: 65.9503% ( 283) 00:13:10.749 3.535 - 3.549: 68.2035% ( 303) 00:13:10.749 3.549 - 3.564: 70.0030% ( 242) 00:13:10.749 3.564 - 3.578: 71.3563% ( 182) 00:13:10.749 3.578 - 3.593: 72.6279% ( 171) 00:13:10.749 3.593 - 3.607: 74.4274% ( 242) 00:13:10.749 3.607 - 3.622: 76.1600% ( 233) 00:13:10.749 3.622 - 3.636: 77.4762% ( 177) 00:13:10.749 3.636 - 3.651: 79.1270% ( 222) 00:13:10.749 3.651 - 3.665: 80.4804% ( 182) 00:13:10.749 3.665 - 3.680: 81.5289% ( 141) 00:13:10.749 3.680 - 3.695: 82.5178% ( 133) 00:13:10.749 3.695 - 3.709: 83.4622% ( 127) 00:13:10.749 3.709 - 3.724: 85.2246% ( 237) 00:13:10.749 3.724 - 3.753: 88.0949% ( 386) 00:13:10.749 3.753 - 3.782: 91.3444% ( 437) 00:13:10.749 3.782 - 3.811: 93.1217% ( 239) 00:13:10.749 3.811 - 3.840: 93.9024% ( 105) 00:13:10.749 3.840 - 3.869: 94.3189% ( 56) 00:13:10.749 3.869 - 3.898: 94.5940% ( 37) 00:13:10.749 3.898 - 3.927: 94.7650% ( 23) 00:13:10.749 3.927 - 3.956: 94.9137% ( 20) 00:13:10.749 3.956 - 3.985: 95.0550% ( 19) 00:13:10.749 3.985 - 4.015: 95.1517% ( 13) 00:13:10.749 4.015 - 4.044: 95.2632% ( 15) 00:13:10.749 4.044 - 4.073: 95.3376% ( 10) 00:13:10.749 4.073 - 4.102: 95.4120% ( 10) 00:13:10.749 4.102 - 4.131: 95.4938% ( 11) 00:13:10.749 4.131 - 4.160: 95.5756% ( 11) 00:13:10.749 4.160 - 4.189: 95.6648% ( 12) 00:13:10.749 4.189 - 4.218: 95.7243% ( 8) 00:13:10.749 4.218 - 4.247: 95.8284% ( 14) 00:13:10.749 4.247 - 4.276: 95.8804% ( 7) 00:13:10.749 4.276 - 4.305: 95.9771% ( 13) 00:13:10.749 4.305 - 4.335: 96.0515% ( 10) 00:13:10.749 4.335 - 4.364: 96.0812% ( 4) 00:13:10.749 4.364 - 4.393: 96.1481% ( 9) 00:13:10.749 4.393 - 4.422: 96.2002% ( 7) 00:13:10.749 4.422 - 4.451: 96.2820% ( 11) 00:13:10.749 4.451 - 4.480: 96.3415% ( 8) 00:13:10.749 4.480 - 4.509: 96.4010% ( 8) 00:13:10.749 4.509 - 4.538: 96.4679% ( 9) 00:13:10.749 4.538 - 4.567: 96.5571% ( 12) 00:13:10.749 4.567 - 4.596: 96.6761% ( 16) 00:13:10.749 4.596 - 4.625: 96.7356% ( 8) 00:13:10.749 4.625 - 4.655: 96.7876% ( 7) 00:13:10.749 4.655 - 4.684: 96.8546% ( 9) 00:13:10.749 4.684 - 4.713: 96.9512% ( 13) 
00:13:10.749 4.713 - 4.742: 97.0256% ( 10) 00:13:10.749 4.742 - 4.771: 97.0628% ( 5) 00:13:10.749 4.771 - 4.800: 97.1074% ( 6) 00:13:10.749 4.800 - 4.829: 97.1297% ( 3) 00:13:10.749 4.829 - 4.858: 97.1743% ( 6) 00:13:10.749 4.858 - 4.887: 97.1892% ( 2) 00:13:10.749 4.887 - 4.916: 97.1966% ( 1) 00:13:10.749 4.945 - 4.975: 97.2040% ( 1) 00:13:10.749 4.975 - 5.004: 97.2189% ( 2) 00:13:10.749 5.004 - 5.033: 97.2338% ( 2) 00:13:10.749 5.062 - 5.091: 97.2487% ( 2) 00:13:10.749 5.207 - 5.236: 97.2561% ( 1) 00:13:10.749 5.236 - 5.265: 97.2635% ( 1) 00:13:10.749 5.324 - 5.353: 97.2710% ( 1) 00:13:10.749 5.411 - 5.440: 97.2784% ( 1) 00:13:10.749 5.818 - 5.847: 97.2933% ( 2) 00:13:10.749 6.953 - 6.982: 97.3007% ( 1) 00:13:10.749 7.244 - 7.273: 97.3156% ( 2) 00:13:10.749 7.505 - 7.564: 97.3230% ( 1) 00:13:10.749 7.564 - 7.622: 97.3305% ( 1) 00:13:10.749 7.680 - 7.738: 97.3453% ( 2) 00:13:10.749 7.738 - 7.796: 97.3528% ( 1) 00:13:10.749 7.796 - 7.855: 97.3602% ( 1) 00:13:10.749 8.204 - 8.262: 97.3751% ( 2) 00:13:10.749 8.262 - 8.320: 97.3825% ( 1) 00:13:10.749 8.320 - 8.378: 97.3974% ( 2) 00:13:10.749 8.436 - 8.495: 97.4048% ( 1) 00:13:10.749 8.611 - 8.669: 97.4123% ( 1) 00:13:10.749 8.669 - 8.727: 97.4346% ( 3) 00:13:10.749 8.785 - 8.844: 97.4420% ( 1) 00:13:10.749 8.844 - 8.902: 97.4569% ( 2) 00:13:10.749 8.902 - 8.960: 97.4866% ( 4) 00:13:10.749 8.960 - 9.018: 97.4941% ( 1) 00:13:10.749 9.018 - 9.076: 97.5089% ( 2) 00:13:10.749 9.076 - 9.135: 97.5164% ( 1) 00:13:10.749 9.251 - 9.309: 97.5312% ( 2) 00:13:10.749 9.367 - 9.425: 97.5535% ( 3) 00:13:10.749 9.425 - 9.484: 97.5610% ( 1) 00:13:10.749 9.542 - 9.600: 97.5758% ( 2) 00:13:10.749 9.658 - 9.716: 97.5907% ( 2) 00:13:10.749 9.775 - 9.833: 97.6056% ( 2) 00:13:10.749 10.065 - 10.124: 97.6130% ( 1) 00:13:10.749 10.240 - 10.298: 97.6205% ( 1) 00:13:10.749 10.298 - 10.356: 97.6279% ( 1) 00:13:10.749 10.647 - 10.705: 97.6353% ( 1) 00:13:10.749 10.880 - 10.938: 97.6428% ( 1) 00:13:10.749 11.404 - 11.462: 97.6502% ( 1) 00:13:10.749 11.520 - 11.578: 97.6576% ( 1) 00:13:10.749 12.335 - 12.393: 97.6651% ( 1) 00:13:10.749 12.684 - 12.742: 97.6725% ( 1) 00:13:10.749 13.091 - 13.149: 97.6800% ( 1) 00:13:10.749 13.731 - 13.789: 97.6874% ( 1) 00:13:10.749 13.847 - 13.905: 97.6948% ( 1) 00:13:10.749 14.196 - 14.255: 97.7023% ( 1) 00:13:10.749 14.255 - 14.313: 97.7097% ( 1) 00:13:10.749 14.371 - 14.429: 97.7246% ( 2) 00:13:10.749 14.429 - 14.487: 97.7320% ( 1) 00:13:10.749 14.545 - 14.604: 97.7394% ( 1) 00:13:10.749 14.604 - 14.662: 97.7543% ( 2) 00:13:10.749 14.778 - 14.836: 97.7617% ( 1) 00:13:10.749 14.836 - 14.895: 97.7841% ( 3) 00:13:10.749 14.895 - 15.011: 97.7989% ( 2) 00:13:10.749 15.011 - 15.127: 97.8064% ( 1) 00:13:10.749 17.222 - 17.338: 97.8212% ( 2) 00:13:10.749 17.455 - 17.571: 97.8435% ( 3) 00:13:10.749 17.571 - 17.687: 97.9105% ( 9) 00:13:10.749 17.687 - 17.804: 97.9997% ( 12) 00:13:10.749 17.804 - 17.920: 98.1261% ( 17) 00:13:10.749 17.920 - 18.036: 98.2525% ( 17) 00:13:10.749 18.036 - 18.153: 98.4087% ( 21) 00:13:10.749 18.153 - 18.269: 98.5797% ( 23) 00:13:10.749 18.269 - 18.385: 98.6764% ( 13) 00:13:10.749 18.385 - 18.502: 98.8028% ( 17) 00:13:10.749 18.502 - 18.618: 98.9515% ( 20) 00:13:10.749 18.618 - 18.735: 99.0779% ( 17) 00:13:10.749 18.735 - 18.851: 99.2415% ( 22) 00:13:10.749 18.851 - 18.967: 99.3233% ( 11) 00:13:10.749 18.967 - 19.084: 99.4274% ( 14) 00:13:10.749 19.084 - 19.200: 99.5390% ( 15) 00:13:10.749 19.200 - 19.316: 99.5985% ( 8) 00:13:10.750 19.316 - 19.433: 99.7174% ( 16) 00:13:10.750 19.433 - 19.549: 99.7546% ( 5) 00:13:10.750 
19.549 - 19.665: 99.8141% ( 8) 00:13:10.750 19.665 - 19.782: 99.8290% ( 2) 00:13:10.750 19.782 - 19.898: 99.8810% ( 7) 00:13:10.750 19.898 - 20.015: 99.8959% ( 2) 00:13:10.750 20.015 - 20.131: 99.9108% ( 2) 00:13:10.750 20.131 - 20.247: 99.9182% ( 1) 00:13:10.750 20.247 - 20.364: 99.9256% ( 1) 00:13:10.750 3038.487 - 3053.382: 99.9331% ( 1) 00:13:10.750 3961.949 - 3991.738: 99.9479% ( 2) 00:13:10.750 3991.738 - 4021.527: 99.9628% ( 2) 00:13:10.750 4021.527 - 4051.316: 99.9851% ( 3) 00:13:10.750 4051.316 - 4081.105: 99.9926% ( 1) 00:13:10.750 7000.436 - 7030.225: 100.0000% ( 1) 00:13:10.750 00:13:10.750 Complete histogram 00:13:10.750 ================== 00:13:10.750 Range in us Cumulative Count 00:13:10.750 2.007 - 2.022: 1.0857% ( 146) 00:13:10.750 2.022 - 2.036: 18.5232% ( 2345) 00:13:10.750 2.036 - 2.051: 32.2278% ( 1843) 00:13:10.750 2.051 - 2.065: 33.3061% ( 145) 00:13:10.750 2.065 - 2.080: 33.6927% ( 52) 00:13:10.750 2.080 - 2.095: 40.2365% ( 880) 00:13:10.750 2.095 - 2.109: 47.4048% ( 964) 00:13:10.750 2.109 - 2.124: 50.6841% ( 441) 00:13:10.750 2.124 - 2.138: 50.9369% ( 34) 00:13:10.750 2.138 - 2.153: 51.4277% ( 66) 00:13:10.750 2.153 - 2.167: 55.4952% ( 547) 00:13:10.750 2.167 - 2.182: 58.8266% ( 448) 00:13:10.750 2.182 - 2.196: 59.1538% ( 44) 00:13:10.750 2.196 - 2.211: 59.3174% ( 22) 00:13:10.750 2.211 - 2.225: 59.9866% ( 90) 00:13:10.750 2.225 - 2.240: 66.9393% ( 935) 00:13:10.750 2.240 - 2.255: 73.2451% ( 848) 00:13:10.750 2.255 - 2.269: 73.8995% ( 88) 00:13:10.750 2.269 - 2.284: 73.9292% ( 4) 00:13:10.750 2.284 - 2.298: 74.0631% ( 18) 00:13:10.750 2.298 - 2.313: 76.7029% ( 355) 00:13:10.750 2.313 - 2.327: 82.5178% ( 782) 00:13:10.750 2.327 - 2.342: 84.1017% ( 213) 00:13:10.750 2.342 - 2.356: 84.2430% ( 19) 00:13:10.750 2.356 - 2.371: 84.3322% ( 12) 00:13:10.750 2.371 - 2.385: 84.6817% ( 47) 00:13:10.750 2.385 - 2.400: 88.9798% ( 578) 00:13:10.750 2.400 - 2.415: 94.8543% ( 790) 00:13:10.750 2.415 - 2.429: 95.6053% ( 101) 00:13:10.750 2.429 - 2.444: 95.6797% ( 10) 00:13:10.750 2.444 - 2.458: 95.8656% ( 25) 00:13:10.750 2.458 - 2.473: 96.0366% ( 23) 00:13:10.750 2.473 - 2.487: 96.2002% ( 22) 00:13:10.750 2.487 - 2.502: 96.2448% ( 6) 00:13:10.750 2.502 - 2.516: 96.2745% ( 4) 00:13:10.750 2.516 - 2.531: 96.3192% ( 6) 00:13:10.750 2.531 - 2.545: 96.3266% ( 1) 00:13:10.750 2.545 - 2.560: 96.3415% ( 2) 00:13:10.750 2.560 - 2.575: 96.3489% ( 1) 00:13:10.750 2.575 - 2.589: 96.3563% ( 1) 00:13:10.750 2.604 - 2.618: 96.3712% ( 2) 00:13:10.750 2.633 - 2.647: 96.3786% ( 1) 00:13:10.750 2.647 - 2.662: 96.3935% ( 2) 00:13:10.750 2.662 - 2.676: 96.4010% ( 1) 00:13:10.750 2.676 - 2.691: 96.4084% ( 1) 00:13:10.750 2.691 - 2.705: 96.4456% ( 5) 00:13:10.750 2.705 - 2.720: 96.4679% ( 3) 00:13:10.750 2.720 - 2.735: 96.4902% ( 3) 00:13:10.750 2.735 - 2.749: 96.5051% ( 2) 00:13:10.750 2.764 - 2.778: 96.5348% ( 4) 00:13:10.750 2.778 - 2.793: 96.5422% ( 1) 00:13:10.750 2.793 - 2.807: 96.5720% ( 4) 00:13:10.750 2.807 - 2.822: 96.6166% ( 6) 00:13:10.750 2.822 - 2.836: 96.6463% ( 4) 00:13:10.750 2.836 - 2.851: 96.6538% ( 1) 00:13:10.750 2.851 - 2.865: 96.7504% ( 13) 00:13:10.750 2.865 - 2.880: 96.8174% ( 9) 00:13:10.750 2.880 - 2.895: 96.8322% ( 2) 00:13:10.750 2.895 - 2.909: 96.9215% ( 12) 00:13:10.750 2.909 - 2.924: 96.9810% ( 8) 00:13:10.750 2.924 - 2.938: 97.0033% ( 3) 00:13:10.750 2.938 - 2.953: 97.0181% ( 2) 00:13:10.750 2.953 - 2.967: 97.0553% ( 5) 00:13:10.750 2.967 - 2.982: 97.0776% ( 3) 00:13:10.750 2.982 - 2.996: 97.1222% ( 6) 00:13:10.750 2.996 - 3.011: 97.1594% ( 5) 00:13:10.750 3.011 - 3.025: 
97.2115% ( 7) 00:13:10.750 3.025 - 3.040: 97.2264% ( 2) 00:13:10.750 3.040 - 3.055: 97.2412% ( 2) 00:13:10.750 3.055 - 3.069: 97.2710% ( 4) 00:13:10.750 3.069 - 3.084: 97.3081% ( 5) 00:13:10.750 3.084 - 3.098: 97.3379% ( 4) 00:13:10.750 3.098 - 3.113: 97.3676% ( 4) 00:13:10.750 3.113 - 3.127: 97.3899% ( 3) 00:13:10.750 3.185 - 3.200: 97.3974% ( 1) 00:13:10.750 3.215 - 3.229: 97.4048% ( 1) 00:13:10.750 3.229 - 3.244: 97.4123% ( 1) 00:13:10.750 3.287 - 3.302: 97.4197% ( 1) 00:13:10.750 3.302 - 3.316: 97.4271% ( 1) 00:13:10.750 3.564 - 3.578: 97.4346% ( 1) 00:13:10.750 3.578 - 3.593: 97.4494% ( 2) 00:13:10.750 3.593 - 3.607: 97.4569% ( 1) 00:13:10.750 3.622 - 3.636: 97.4643% ( 1) 00:13:10.750 3.651 - 3.665: 97.4717% ( 1) 00:13:10.750 3.665 - 3.680: 97.4792% ( 1) 00:13:10.750 3.724 - 3.753: 97.4941% ( 2) 00:13:10.750 3.753 - 3.782: 97.5015% ( 1) 00:13:10.750 3.782 - 3.811: 97.5089% ( 1) 00:13:10.750 3.840 - 3.869: 97.5164% ( 1) 00:13:10.750 3.898 - 3.927: 97.5238% ( 1) 00:13:10.750 3.927 - 3.956: 97.5461% ( 3) 00:13:10.750 3.956 - 3.985: 97.5535% ( 1) 00:13:10.750 3.985 - 4.015: 97.5610% ( 1) 00:13:10.750 4.044 - 4.073: 97.5684% ( 1) 00:13:10.750 4.131 - 4.160: 97.5758% ( 1) 00:13:10.750 4.276 - 4.305: 97.5833% ( 1) 00:13:10.750 4.305 - 4.335: 97.5907% ( 1) 00:13:10.750 4.509 - 4.538: 97.5982% ( 1) 00:13:10.750 4.625 - 4.655: 97.6056% ( 1) 00:13:10.750 4.771 - 4.800: 97.6130% ( 1) 00:13:10.750 4.887 - 4.916: 97.6205% ( 1) 00:13:10.750 5.847 - 5.876: 97.6279% ( 1) 00:13:10.750 6.022 - 6.051: 97.6353% ( 1) 00:13:10.750 6.167 - 6.196: 97.6428% ( 1) 00:13:10.750 6.196 - 6.225: 97.6576% ( 2) 00:13:10.750 6.255 - 6.284: 97.6651% ( 1) 00:13:10.750 6.313 - 6.342: 97.6800% ( 2) 00:13:10.750 6.342 - 6.371: 97.6874% ( 1) 00:13:10.750 6.371 - 6.400: 97.6948% ( 1) 00:13:10.750 6.458 - 6.487: 97.7097% ( 2) 00:13:10.750 6.633 - 6.662: 97.7171% ( 1) 00:13:10.750 6.836 - 6.865: 97.7246% ( 1) 00:13:10.750 6.924 - 6.953: 97.7320% ( 1) 00:13:10.750 6.982 - 7.011: 97.7394% ( 1) 00:13:10.750 7.156 - 7.185: 97.7469% ( 1) 00:13:10.750 7.273 - 7.302: 97.7543% ( 1) 00:13:10.750 7.418 - 7.447: 97.7766% ( 3) 00:13:10.750 7.447 - 7.505: 97.7915% ( 2) 00:13:10.750 7.505 - 7.564: 97.8064% ( 2) 00:13:10.750 7.680 - 7.738: 97.8212% ( 2) 00:13:10.750 7.738 - 7.796: 97.8435% ( 3) 00:13:10.750 7.855 - 7.913: 97.8510% ( 1) 00:13:10.750 7.971 - 8.029: 97.8733% ( 3) 00:13:10.750 8.087 - 8.145: 97.8807% ( 1) 00:13:10.750 8.145 - 8.204: 97.8882% ( 1) 00:13:10.750 8.320 - 8.378: 97.8956% ( 1) 00:13:10.750 8.378 - 8.436: 97.9030% ( 1) 00:13:10.750 8.553 - 8.611: 97.9105% ( 1) 00:13:10.750 8.611 - 8.669: 97.9179% ( 1) 00:13:10.750 8.785 - 8.844: 97.9253% ( 1) 00:13:10.750 8.844 - 8.902: 97.9328% ( 1) 00:13:10.750 8.902 - 8.960: 97.9402% ( 1) 00:13:10.750 8.960 - 9.018: 97.9477% ( 1) 00:13:10.750 9.193 - 9.251: 97.9551% ( 1) 00:13:10.750 9.309 - 9.367: 97.9700% ( 2) 00:13:10.750 9.367 - 9.425: 97.9774% ( 1) 00:13:10.750 9.542 - 9.600: 97.9923% ( 2) 00:13:10.750 9.716 - 9.775: 97.9997% ( 1) 00:13:10.750 10.473 - 10.531: 98.0071% ( 1) 00:13:10.750 10.880 - 10.938: 98.0220% ( 2) 00:13:10.750 10.996 - 11.055: 98.0294% ( 1) 00:13:10.750 11.287 - 11.345: 98.0369% ( 1) 00:13:10.750 11.345 - 11.404: 98.0443% ( 1) 00:13:10.750 11.695 - 11.753: 98.0518% ( 1) 00:13:10.750 11.811 - 11.869: 98.0592% ( 1) 00:13:10.750 12.102 - 12.160: 98.0666% ( 1) 00:13:10.750 12.160 - 12.218: 98.0815% ( 2) 00:13:10.750 12.625 - 12.684: 98.0889% ( 1) 00:13:10.750 12.858 - 12.916: 98.0964% ( 1) 00:13:10.750 13.033 - 13.091: 98.1038% ( 1) 00:13:10.750 13.091 - 13.149: 
98.1112% ( 1) 00:13:10.750 13.207 - 13.265: 98.1187% ( 1) 00:13:10.750 13.324 - 13.382: 98.1336% ( 2) 00:13:10.750 13.440 - 13.498: 98.1410% ( 1) 00:13:10.750 13.498 - 13.556: 98.1559% ( 2) 00:13:10.750 13.673 - 13.731: 98.1633% ( 1) 00:13:10.750 14.022 - 14.080: 98.1707% ( 1) 00:13:10.750 15.942 - 16.058: 98.1782% ( 1) 00:13:10.750 16.291 - 16.407: 98.2377% ( 8) 00:13:10.750 16.407 - 16.524: 98.2748% ( 5) 00:13:10.750 16.524 - 16.640: 98.3492% ( 10) 00:13:10.750 16.640 - 16.756: 98.4533% ( 14) 00:13:10.750 16.756 - 16.873: 98.5351% ( 11) 00:13:10.750 16.873 - 16.989: 98.5872% ( 7) 00:13:10.750 16.989 - 17.105: 98.6764% ( 12) 00:13:10.750 17.105 - 17.222: 98.7656% ( 12) 00:13:10.751 17.222 - 17.338: 98.8028% ( 5) 00:13:10.751 17.338 - 17.455: 98.8623%[2024-04-25 17:16:40.606935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:10.751 ( 8) 00:13:10.751 17.455 - 17.571: 98.9143% ( 7) 00:13:10.751 17.571 - 17.687: 99.0036% ( 12) 00:13:10.751 17.687 - 17.804: 99.0705% ( 9) 00:13:10.751 17.804 - 17.920: 99.1597% ( 12) 00:13:10.751 17.920 - 18.036: 99.1969% ( 5) 00:13:10.751 18.036 - 18.153: 99.2118% ( 2) 00:13:10.751 18.153 - 18.269: 99.2415% ( 4) 00:13:10.751 18.269 - 18.385: 99.2638% ( 3) 00:13:10.751 18.385 - 18.502: 99.2787% ( 2) 00:13:10.751 18.502 - 18.618: 99.2936% ( 2) 00:13:10.751 18.618 - 18.735: 99.3084% ( 2) 00:13:10.751 18.735 - 18.851: 99.3233% ( 2) 00:13:10.751 19.316 - 19.433: 99.3308% ( 1) 00:13:10.751 19.433 - 19.549: 99.3382% ( 1) 00:13:10.751 3008.698 - 3023.593: 99.3456% ( 1) 00:13:10.751 3023.593 - 3038.487: 99.4200% ( 10) 00:13:10.751 3038.487 - 3053.382: 99.4572% ( 5) 00:13:10.751 3053.382 - 3068.276: 99.4646% ( 1) 00:13:10.751 3068.276 - 3083.171: 99.4795% ( 2) 00:13:10.751 3083.171 - 3098.065: 99.4869% ( 1) 00:13:10.751 3112.960 - 3127.855: 99.4943% ( 1) 00:13:10.751 3842.793 - 3872.582: 99.5018% ( 1) 00:13:10.751 3872.582 - 3902.371: 99.5092% ( 1) 00:13:10.751 3932.160 - 3961.949: 99.5315% ( 3) 00:13:10.751 3961.949 - 3991.738: 99.6505% ( 16) 00:13:10.751 3991.738 - 4021.527: 99.8215% ( 23) 00:13:10.751 4021.527 - 4051.316: 99.9405% ( 16) 00:13:10.751 4051.316 - 4081.105: 99.9479% ( 1) 00:13:10.751 6047.185 - 6076.975: 99.9554% ( 1) 00:13:10.751 7000.436 - 7030.225: 99.9926% ( 5) 00:13:10.751 7030.225 - 7060.015: 100.0000% ( 1) 00:13:10.751 00:13:10.751 17:16:40 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:10.751 17:16:40 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:10.751 17:16:40 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:10.751 17:16:40 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:10.751 17:16:40 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:11.009 [2024-04-25 17:16:40.915276] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:11.009 [ 00:13:11.009 { 00:13:11.009 "allow_any_host": true, 00:13:11.009 "hosts": [], 00:13:11.009 "listen_addresses": [], 00:13:11.009 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:11.009 "subtype": "Discovery" 00:13:11.009 }, 00:13:11.009 { 00:13:11.009 "allow_any_host": true, 00:13:11.009 "hosts": [], 00:13:11.009 "listen_addresses": [ 00:13:11.009 { 00:13:11.009 "adrfam": "IPv4", 00:13:11.009 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:13:11.009 "transport": "VFIOUSER", 00:13:11.009 "trsvcid": "0", 00:13:11.009 "trtype": "VFIOUSER" 00:13:11.009 } 00:13:11.009 ], 00:13:11.009 "max_cntlid": 65519, 00:13:11.009 "max_namespaces": 32, 00:13:11.009 "min_cntlid": 1, 00:13:11.009 "model_number": "SPDK bdev Controller", 00:13:11.009 "namespaces": [ 00:13:11.009 { 00:13:11.009 "bdev_name": "Malloc1", 00:13:11.009 "name": "Malloc1", 00:13:11.009 "nguid": "EBC17231AB2C4D3D86F978C3D2A1D4A4", 00:13:11.009 "nsid": 1, 00:13:11.009 "uuid": "ebc17231-ab2c-4d3d-86f9-78c3d2a1d4a4" 00:13:11.009 } 00:13:11.009 ], 00:13:11.009 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:11.009 "serial_number": "SPDK1", 00:13:11.009 "subtype": "NVMe" 00:13:11.009 }, 00:13:11.009 { 00:13:11.009 "allow_any_host": true, 00:13:11.009 "hosts": [], 00:13:11.009 "listen_addresses": [ 00:13:11.009 { 00:13:11.009 "adrfam": "IPv4", 00:13:11.009 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:11.009 "transport": "VFIOUSER", 00:13:11.009 "trsvcid": "0", 00:13:11.009 "trtype": "VFIOUSER" 00:13:11.009 } 00:13:11.009 ], 00:13:11.009 "max_cntlid": 65519, 00:13:11.009 "max_namespaces": 32, 00:13:11.009 "min_cntlid": 1, 00:13:11.009 "model_number": "SPDK bdev Controller", 00:13:11.009 "namespaces": [ 00:13:11.009 { 00:13:11.009 "bdev_name": "Malloc2", 00:13:11.009 "name": "Malloc2", 00:13:11.009 "nguid": "01C3005E4A6B4BCABF830084165FE3AC", 00:13:11.009 "nsid": 1, 00:13:11.009 "uuid": "01c3005e-4a6b-4bca-bf83-0084165fe3ac" 00:13:11.009 } 00:13:11.009 ], 00:13:11.009 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:11.009 "serial_number": "SPDK2", 00:13:11.009 "subtype": "NVMe" 00:13:11.009 } 00:13:11.009 ] 00:13:11.009 17:16:40 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:11.009 17:16:40 -- target/nvmf_vfio_user.sh@34 -- # aerpid=75090 00:13:11.009 17:16:40 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:11.009 17:16:40 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:11.009 17:16:40 -- common/autotest_common.sh@1251 -- # local i=0 00:13:11.009 17:16:40 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:11.009 17:16:40 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:13:11.009 17:16:40 -- common/autotest_common.sh@1254 -- # i=1 00:13:11.009 17:16:40 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:13:11.267 17:16:41 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:11.267 17:16:41 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:13:11.267 17:16:41 -- common/autotest_common.sh@1254 -- # i=2 00:13:11.267 17:16:41 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:13:11.267 [2024-04-25 17:16:41.150231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:11.267 17:16:41 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:11.267 17:16:41 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:11.267 17:16:41 -- common/autotest_common.sh@1262 -- # return 0 00:13:11.267 17:16:41 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:11.267 17:16:41 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:11.525 Malloc3 00:13:11.525 17:16:41 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:11.784 [2024-04-25 17:16:41.664105] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:11.784 17:16:41 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:11.784 Asynchronous Event Request test 00:13:11.784 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:11.784 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:11.784 Registering asynchronous event callbacks... 00:13:11.784 Starting namespace attribute notice tests for all controllers... 00:13:11.784 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:11.784 aer_cb - Changed Namespace 00:13:11.784 Cleaning up... 00:13:12.042 [ 00:13:12.042 { 00:13:12.042 "allow_any_host": true, 00:13:12.042 "hosts": [], 00:13:12.042 "listen_addresses": [], 00:13:12.042 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:12.042 "subtype": "Discovery" 00:13:12.042 }, 00:13:12.042 { 00:13:12.042 "allow_any_host": true, 00:13:12.042 "hosts": [], 00:13:12.042 "listen_addresses": [ 00:13:12.042 { 00:13:12.042 "adrfam": "IPv4", 00:13:12.042 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:12.042 "transport": "VFIOUSER", 00:13:12.042 "trsvcid": "0", 00:13:12.042 "trtype": "VFIOUSER" 00:13:12.042 } 00:13:12.042 ], 00:13:12.042 "max_cntlid": 65519, 00:13:12.042 "max_namespaces": 32, 00:13:12.042 "min_cntlid": 1, 00:13:12.042 "model_number": "SPDK bdev Controller", 00:13:12.042 "namespaces": [ 00:13:12.042 { 00:13:12.042 "bdev_name": "Malloc1", 00:13:12.042 "name": "Malloc1", 00:13:12.042 "nguid": "EBC17231AB2C4D3D86F978C3D2A1D4A4", 00:13:12.042 "nsid": 1, 00:13:12.042 "uuid": "ebc17231-ab2c-4d3d-86f9-78c3d2a1d4a4" 00:13:12.042 }, 00:13:12.042 { 00:13:12.042 "bdev_name": "Malloc3", 00:13:12.042 "name": "Malloc3", 00:13:12.042 "nguid": "6483EA0687C84E3D84A5F39B39461DBB", 00:13:12.042 "nsid": 2, 00:13:12.042 "uuid": "6483ea06-87c8-4e3d-84a5-f39b39461dbb" 00:13:12.042 } 00:13:12.042 ], 00:13:12.042 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:12.042 "serial_number": "SPDK1", 00:13:12.042 "subtype": "NVMe" 00:13:12.042 }, 00:13:12.042 { 00:13:12.042 "allow_any_host": true, 00:13:12.042 "hosts": [], 00:13:12.042 "listen_addresses": [ 00:13:12.042 { 00:13:12.042 "adrfam": "IPv4", 00:13:12.042 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:12.042 "transport": "VFIOUSER", 00:13:12.042 "trsvcid": "0", 00:13:12.042 "trtype": "VFIOUSER" 00:13:12.042 } 00:13:12.042 ], 00:13:12.042 "max_cntlid": 65519, 00:13:12.042 "max_namespaces": 32, 00:13:12.042 "min_cntlid": 1, 00:13:12.042 "model_number": "SPDK bdev Controller", 00:13:12.042 "namespaces": [ 00:13:12.042 { 00:13:12.042 "bdev_name": "Malloc2", 00:13:12.042 "name": "Malloc2", 00:13:12.042 "nguid": "01C3005E4A6B4BCABF830084165FE3AC", 00:13:12.042 "nsid": 1, 00:13:12.042 "uuid": "01c3005e-4a6b-4bca-bf83-0084165fe3ac" 00:13:12.042 } 00:13:12.042 ], 00:13:12.042 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:12.042 
"serial_number": "SPDK2", 00:13:12.042 "subtype": "NVMe" 00:13:12.042 } 00:13:12.042 ] 00:13:12.042 17:16:41 -- target/nvmf_vfio_user.sh@44 -- # wait 75090 00:13:12.042 17:16:41 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.042 17:16:41 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:12.042 17:16:41 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:12.042 17:16:41 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:12.042 [2024-04-25 17:16:41.970039] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:12.042 [2024-04-25 17:16:41.970093] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75127 ] 00:13:12.302 [2024-04-25 17:16:42.110458] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:12.302 [2024-04-25 17:16:42.117958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:12.302 [2024-04-25 17:16:42.118006] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f503f412000 00:13:12.302 [2024-04-25 17:16:42.118958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.119957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.120962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.121963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.122974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.123977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.124983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.125985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.302 [2024-04-25 17:16:42.127000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:12.302 [2024-04-25 17:16:42.127026] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f503f407000 00:13:12.302 [2024-04-25 17:16:42.128339] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:12.302 [2024-04-25 17:16:42.142473] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:12.302 [2024-04-25 17:16:42.142526] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:12.302 [2024-04-25 17:16:42.147620] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:12.302 [2024-04-25 17:16:42.147694] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:12.302 [2024-04-25 17:16:42.147780] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:12.302 [2024-04-25 17:16:42.147805] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:12.302 [2024-04-25 17:16:42.147811] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:12.302 [2024-04-25 17:16:42.148625] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:12.302 [2024-04-25 17:16:42.148667] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:12.302 [2024-04-25 17:16:42.148678] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:12.302 [2024-04-25 17:16:42.149626] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:12.302 [2024-04-25 17:16:42.149666] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:12.302 [2024-04-25 17:16:42.149678] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:12.302 [2024-04-25 17:16:42.150634] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:12.302 [2024-04-25 17:16:42.150674] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:12.302 [2024-04-25 17:16:42.151638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:12.302 [2024-04-25 17:16:42.151676] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:12.302 [2024-04-25 17:16:42.151683] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:12.302 [2024-04-25 17:16:42.151693] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:12.302 [2024-04-25 17:16:42.151799] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:12.302 [2024-04-25 17:16:42.151805] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:12.302 [2024-04-25 17:16:42.151810] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:12.302 [2024-04-25 17:16:42.153743] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:12.302 [2024-04-25 17:16:42.154662] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:12.302 [2024-04-25 17:16:42.155663] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:12.302 [2024-04-25 17:16:42.156657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:12.302 [2024-04-25 17:16:42.156772] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:12.302 [2024-04-25 17:16:42.157665] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:12.302 [2024-04-25 17:16:42.157702] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:12.302 [2024-04-25 17:16:42.157709] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.157738] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:12.302 [2024-04-25 17:16:42.157754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.157772] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.302 [2024-04-25 17:16:42.157779] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.302 [2024-04-25 17:16:42.157792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.302 [2024-04-25 17:16:42.163779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:12.302 [2024-04-25 17:16:42.163820] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:12.302 [2024-04-25 17:16:42.163826] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:12.302 [2024-04-25 17:16:42.163831] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:12.302 [2024-04-25 17:16:42.163836] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:12.302 [2024-04-25 17:16:42.163840] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:12.302 [2024-04-25 17:16:42.163845] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:12.302 [2024-04-25 17:16:42.163851] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.163860] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.163872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:12.302 [2024-04-25 17:16:42.171744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:12.302 [2024-04-25 17:16:42.171790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.302 [2024-04-25 17:16:42.171801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.302 [2024-04-25 17:16:42.171808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.302 [2024-04-25 17:16:42.171816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.302 [2024-04-25 17:16:42.171822] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.171835] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.171845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:12.302 [2024-04-25 17:16:42.179714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:12.302 [2024-04-25 17:16:42.179733] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:12.302 [2024-04-25 17:16:42.179755] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.179770] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.179777] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:12.302 [2024-04-25 17:16:42.179788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:12.302 [2024-04-25 17:16:42.186746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.186821] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.186834] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.186844] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:12.303 [2024-04-25 17:16:42.186849] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:12.303 [2024-04-25 17:16:42.186856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.194713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.194754] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:12.303 [2024-04-25 17:16:42.194768] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.194780] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.194789] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.303 [2024-04-25 17:16:42.194794] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.303 [2024-04-25 17:16:42.194802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.201733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.201795] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.201807] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.201817] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.303 [2024-04-25 17:16:42.201822] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.303 [2024-04-25 17:16:42.201829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.209761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.209800] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.209811] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 
00:13:12.303 [2024-04-25 17:16:42.209822] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.209829] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.209835] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.209840] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:12.303 [2024-04-25 17:16:42.209845] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:12.303 [2024-04-25 17:16:42.209850] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:12.303 [2024-04-25 17:16:42.209869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.217716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.217760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.224750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.224792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.232764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.232806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.240747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.240793] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:12.303 [2024-04-25 17:16:42.240800] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:12.303 [2024-04-25 17:16:42.240804] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:12.303 [2024-04-25 17:16:42.240808] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:12.303 [2024-04-25 17:16:42.240815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:12.303 [2024-04-25 17:16:42.240823] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:12.303 [2024-04-25 17:16:42.240828] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:12.303 [2024-04-25 17:16:42.240834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.240841] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:12.303 [2024-04-25 17:16:42.240845] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.303 [2024-04-25 17:16:42.240851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.240859] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:12.303 [2024-04-25 17:16:42.240865] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:12.303 [2024-04-25 17:16:42.240870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:12.303 [2024-04-25 17:16:42.248750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.248797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.248811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:12.303 [2024-04-25 17:16:42.248819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:12.303 ===================================================== 00:13:12.303 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:12.303 ===================================================== 00:13:12.303 Controller Capabilities/Features 00:13:12.303 ================================ 00:13:12.303 Vendor ID: 4e58 00:13:12.303 Subsystem Vendor ID: 4e58 00:13:12.303 Serial Number: SPDK2 00:13:12.303 Model Number: SPDK bdev Controller 00:13:12.303 Firmware Version: 24.05 00:13:12.303 Recommended Arb Burst: 6 00:13:12.303 IEEE OUI Identifier: 8d 6b 50 00:13:12.303 Multi-path I/O 00:13:12.303 May have multiple subsystem ports: Yes 00:13:12.303 May have multiple controllers: Yes 00:13:12.303 Associated with SR-IOV VF: No 00:13:12.303 Max Data Transfer Size: 131072 00:13:12.303 Max Number of Namespaces: 32 00:13:12.303 Max Number of I/O Queues: 127 00:13:12.303 NVMe Specification Version (VS): 1.3 00:13:12.303 NVMe Specification Version (Identify): 1.3 00:13:12.303 Maximum Queue Entries: 256 00:13:12.303 Contiguous Queues Required: Yes 00:13:12.303 Arbitration Mechanisms Supported 00:13:12.303 Weighted Round Robin: Not Supported 00:13:12.303 Vendor Specific: Not Supported 00:13:12.303 Reset Timeout: 15000 ms 00:13:12.303 Doorbell Stride: 4 bytes 00:13:12.303 NVM Subsystem Reset: Not Supported 00:13:12.303 Command Sets Supported 00:13:12.303 NVM Command Set: Supported 00:13:12.303 Boot Partition: Not Supported 00:13:12.303 Memory Page Size Minimum: 4096 bytes 00:13:12.303 Memory Page Size Maximum: 4096 bytes 00:13:12.303 Persistent Memory Region: Not Supported 00:13:12.303 Optional Asynchronous Events Supported 00:13:12.303 Namespace Attribute Notices: Supported 00:13:12.303 Firmware Activation Notices: Not Supported 00:13:12.303 ANA Change Notices: Not Supported 
00:13:12.303 PLE Aggregate Log Change Notices: Not Supported 00:13:12.303 LBA Status Info Alert Notices: Not Supported 00:13:12.303 EGE Aggregate Log Change Notices: Not Supported 00:13:12.303 Normal NVM Subsystem Shutdown event: Not Supported 00:13:12.303 Zone Descriptor Change Notices: Not Supported 00:13:12.303 Discovery Log Change Notices: Not Supported 00:13:12.303 Controller Attributes 00:13:12.303 128-bit Host Identifier: Supported 00:13:12.303 Non-Operational Permissive Mode: Not Supported 00:13:12.303 NVM Sets: Not Supported 00:13:12.303 Read Recovery Levels: Not Supported 00:13:12.303 Endurance Groups: Not Supported 00:13:12.303 Predictable Latency Mode: Not Supported 00:13:12.303 Traffic Based Keep ALive: Not Supported 00:13:12.303 Namespace Granularity: Not Supported 00:13:12.303 SQ Associations: Not Supported 00:13:12.303 UUID List: Not Supported 00:13:12.303 Multi-Domain Subsystem: Not Supported 00:13:12.303 Fixed Capacity Management: Not Supported 00:13:12.303 Variable Capacity Management: Not Supported 00:13:12.303 Delete Endurance Group: Not Supported 00:13:12.303 Delete NVM Set: Not Supported 00:13:12.303 Extended LBA Formats Supported: Not Supported 00:13:12.303 Flexible Data Placement Supported: Not Supported 00:13:12.303 00:13:12.303 Controller Memory Buffer Support 00:13:12.303 ================================ 00:13:12.303 Supported: No 00:13:12.304 00:13:12.304 Persistent Memory Region Support 00:13:12.304 ================================ 00:13:12.304 Supported: No 00:13:12.304 00:13:12.304 Admin Command Set Attributes 00:13:12.304 ============================ 00:13:12.304 Security Send/Receive: Not Supported 00:13:12.304 Format NVM: Not Supported 00:13:12.304 Firmware Activate/Download: Not Supported 00:13:12.304 Namespace Management: Not Supported 00:13:12.304 Device Self-Test: Not Supported 00:13:12.304 Directives: Not Supported 00:13:12.304 NVMe-MI: Not Supported 00:13:12.304 Virtualization Management: Not Supported 00:13:12.304 Doorbell Buffer Config: Not Supported 00:13:12.304 Get LBA Status Capability: Not Supported 00:13:12.304 Command & Feature Lockdown Capability: Not Supported 00:13:12.304 Abort Command Limit: 4 00:13:12.304 Async Event Request Limit: 4 00:13:12.304 Number of Firmware Slots: N/A 00:13:12.304 Firmware Slot 1 Read-Only: N/A 00:13:12.304 Firmware Activation Without Reset: N/A 00:13:12.304 Multiple Update Detection Support: N/A 00:13:12.304 Firmware Update Granularity: No Information Provided 00:13:12.304 Per-Namespace SMART Log: No 00:13:12.304 Asymmetric Namespace Access Log Page: Not Supported 00:13:12.304 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:12.304 Command Effects Log Page: Supported 00:13:12.304 Get Log Page Extended Data: Supported 00:13:12.304 Telemetry Log Pages: Not Supported 00:13:12.304 Persistent Event Log Pages: Not Supported 00:13:12.304 Supported Log Pages Log Page: May Support 00:13:12.304 Commands Supported & Effects Log Page: Not Supported 00:13:12.304 Feature Identifiers & Effects Log Page:May Support 00:13:12.304 NVMe-MI Commands & Effects Log Page: May Support 00:13:12.304 Data Area 4 for Telemetry Log: Not Supported 00:13:12.304 Error Log Page Entries Supported: 128 00:13:12.304 Keep Alive: Supported 00:13:12.304 Keep Alive Granularity: 10000 ms 00:13:12.304 00:13:12.304 NVM Command Set Attributes 00:13:12.304 ========================== 00:13:12.304 Submission Queue Entry Size 00:13:12.304 Max: 64 00:13:12.304 Min: 64 00:13:12.304 Completion Queue Entry Size 00:13:12.304 Max: 16 00:13:12.304 Min: 16 
00:13:12.304 Number of Namespaces: 32 00:13:12.304 Compare Command: Supported 00:13:12.304 Write Uncorrectable Command: Not Supported 00:13:12.304 Dataset Management Command: Supported 00:13:12.304 Write Zeroes Command: Supported 00:13:12.304 Set Features Save Field: Not Supported 00:13:12.304 Reservations: Not Supported 00:13:12.304 Timestamp: Not Supported 00:13:12.304 Copy: Supported 00:13:12.304 Volatile Write Cache: Present 00:13:12.304 Atomic Write Unit (Normal): 1 00:13:12.304 Atomic Write Unit (PFail): 1 00:13:12.304 Atomic Compare & Write Unit: 1 00:13:12.304 Fused Compare & Write: Supported 00:13:12.304 Scatter-Gather List 00:13:12.304 SGL Command Set: Supported (Dword aligned) 00:13:12.304 SGL Keyed: Not Supported 00:13:12.304 SGL Bit Bucket Descriptor: Not Supported 00:13:12.304 SGL Metadata Pointer: Not Supported 00:13:12.304 Oversized SGL: Not Supported 00:13:12.304 SGL Metadata Address: Not Supported 00:13:12.304 SGL Offset: Not Supported 00:13:12.304 Transport SGL Data Block: Not Supported 00:13:12.304 Replay Protected Memory Block: Not Supported 00:13:12.304 00:13:12.304 Firmware Slot Information 00:13:12.304 ========================= 00:13:12.304 Active slot: 1 00:13:12.304 Slot 1 Firmware Revision: 24.05 00:13:12.304 00:13:12.304 00:13:12.304 Commands Supported and Effects 00:13:12.304 ============================== 00:13:12.304 Admin Commands 00:13:12.304 -------------- 00:13:12.304 Get Log Page (02h): Supported 00:13:12.304 Identify (06h): Supported 00:13:12.304 Abort (08h): Supported 00:13:12.304 Set Features (09h): Supported 00:13:12.304 Get Features (0Ah): Supported 00:13:12.304 Asynchronous Event Request (0Ch): Supported 00:13:12.304 Keep Alive (18h): Supported 00:13:12.304 I/O Commands 00:13:12.304 ------------ 00:13:12.304 Flush (00h): Supported LBA-Change 00:13:12.304 Write (01h): Supported LBA-Change 00:13:12.304 Read (02h): Supported 00:13:12.304 Compare (05h): Supported 00:13:12.304 Write Zeroes (08h): Supported LBA-Change 00:13:12.304 Dataset Management (09h): Supported LBA-Change 00:13:12.304 Copy (19h): Supported LBA-Change 00:13:12.304 Unknown (79h): Supported LBA-Change 00:13:12.304 Unknown (7Ah): Supported 00:13:12.304 00:13:12.304 Error Log 00:13:12.304 ========= 00:13:12.304 00:13:12.304 Arbitration 00:13:12.304 =========== 00:13:12.304 Arbitration Burst: 1 00:13:12.304 00:13:12.304 Power Management 00:13:12.304 ================ 00:13:12.304 Number of Power States: 1 00:13:12.304 Current Power State: Power State #0 00:13:12.304 Power State #0: 00:13:12.304 Max Power: 0.00 W 00:13:12.304 Non-Operational State: Operational 00:13:12.304 Entry Latency: Not Reported 00:13:12.304 Exit Latency: Not Reported 00:13:12.304 Relative Read Throughput: 0 00:13:12.304 Relative Read Latency: 0 00:13:12.304 Relative Write Throughput: 0 00:13:12.304 Relative Write Latency: 0 00:13:12.304 Idle Power: Not Reported 00:13:12.304 Active Power: Not Reported 00:13:12.304 Non-Operational Permissive Mode: Not Supported 00:13:12.304 00:13:12.304 Health Information 00:13:12.304 ================== 00:13:12.304 Critical Warnings: 00:13:12.304 Available Spare Space: OK 00:13:12.304 Temperature: OK 00:13:12.304 Device Reliability: OK 00:13:12.304 Read Only: No 00:13:12.304 Volatile Memory Backup: OK 00:13:12.304 Current Temperature: 0 Kelvin (-2[2024-04-25 17:16:42.248922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:12.304 [2024-04-25 17:16:42.256736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:12.304 [2024-04-25 17:16:42.256799] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:12.304 [2024-04-25 17:16:42.256813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.304 [2024-04-25 17:16:42.256821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.304 [2024-04-25 17:16:42.256827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.304 [2024-04-25 17:16:42.256834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.304 [2024-04-25 17:16:42.256928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:12.304 [2024-04-25 17:16:42.256946] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:12.304 [2024-04-25 17:16:42.257924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:12.304 [2024-04-25 17:16:42.258035] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:12.304 [2024-04-25 17:16:42.258045] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:12.304 [2024-04-25 17:16:42.258928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:12.304 [2024-04-25 17:16:42.258972] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:12.304 [2024-04-25 17:16:42.259028] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:12.304 [2024-04-25 17:16:42.260304] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:12.563 73 Celsius) 00:13:12.563 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:12.563 Available Spare: 0% 00:13:12.563 Available Spare Threshold: 0% 00:13:12.563 Life Percentage Used: 0% 00:13:12.563 Data Units Read: 0 00:13:12.563 Data Units Written: 0 00:13:12.563 Host Read Commands: 0 00:13:12.563 Host Write Commands: 0 00:13:12.563 Controller Busy Time: 0 minutes 00:13:12.563 Power Cycles: 0 00:13:12.563 Power On Hours: 0 hours 00:13:12.563 Unsafe Shutdowns: 0 00:13:12.563 Unrecoverable Media Errors: 0 00:13:12.563 Lifetime Error Log Entries: 0 00:13:12.563 Warning Temperature Time: 0 minutes 00:13:12.563 Critical Temperature Time: 0 minutes 00:13:12.563 00:13:12.563 Number of Queues 00:13:12.563 ================ 00:13:12.563 Number of I/O Submission Queues: 127 00:13:12.563 Number of I/O Completion Queues: 127 00:13:12.563 00:13:12.563 Active Namespaces 00:13:12.563 ================= 00:13:12.563 Namespace ID:1 00:13:12.563 Error Recovery Timeout: Unlimited 00:13:12.563 Command Set Identifier: NVM (00h) 00:13:12.563 Deallocate: Supported 00:13:12.563 Deallocated/Unwritten 
Error: Not Supported 00:13:12.563 Deallocated Read Value: Unknown 00:13:12.563 Deallocate in Write Zeroes: Not Supported 00:13:12.563 Deallocated Guard Field: 0xFFFF 00:13:12.563 Flush: Supported 00:13:12.564 Reservation: Supported 00:13:12.564 Namespace Sharing Capabilities: Multiple Controllers 00:13:12.564 Size (in LBAs): 131072 (0GiB) 00:13:12.564 Capacity (in LBAs): 131072 (0GiB) 00:13:12.564 Utilization (in LBAs): 131072 (0GiB) 00:13:12.564 NGUID: 01C3005E4A6B4BCABF830084165FE3AC 00:13:12.564 UUID: 01c3005e-4a6b-4bca-bf83-0084165fe3ac 00:13:12.564 Thin Provisioning: Not Supported 00:13:12.564 Per-NS Atomic Units: Yes 00:13:12.564 Atomic Boundary Size (Normal): 0 00:13:12.564 Atomic Boundary Size (PFail): 0 00:13:12.564 Atomic Boundary Offset: 0 00:13:12.564 Maximum Single Source Range Length: 65535 00:13:12.564 Maximum Copy Length: 65535 00:13:12.564 Maximum Source Range Count: 1 00:13:12.564 NGUID/EUI64 Never Reused: No 00:13:12.564 Namespace Write Protected: No 00:13:12.564 Number of LBA Formats: 1 00:13:12.564 Current LBA Format: LBA Format #00 00:13:12.564 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:12.564 00:13:12.564 17:16:42 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:12.564 [2024-04-25 17:16:42.540321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.840 [2024-04-25 17:16:47.632054] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.840 Initializing NVMe Controllers 00:13:17.840 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:17.840 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:17.840 Initialization complete. Launching workers. 00:13:17.840 ======================================================== 00:13:17.840 Latency(us) 00:13:17.840 Device Information : IOPS MiB/s Average min max 00:13:17.840 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37094.63 144.90 3450.37 1061.38 10578.30 00:13:17.840 ======================================================== 00:13:17.840 Total : 37094.63 144.90 3450.37 1061.38 10578.30 00:13:17.840 00:13:17.840 17:16:47 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:18.098 [2024-04-25 17:16:47.925707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.373 [2024-04-25 17:16:52.934522] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.373 Initializing NVMe Controllers 00:13:23.373 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:23.373 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:23.373 Initialization complete. Launching workers. 
00:13:23.373 ======================================================== 00:13:23.373 Latency(us) 00:13:23.373 Device Information : IOPS MiB/s Average min max 00:13:23.373 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36811.60 143.80 3477.16 1064.84 11254.53 00:13:23.373 ======================================================== 00:13:23.373 Total : 36811.60 143.80 3477.16 1064.84 11254.53 00:13:23.373 00:13:23.373 17:16:52 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:23.373 [2024-04-25 17:16:53.199586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:28.644 [2024-04-25 17:16:58.328018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:28.644 Initializing NVMe Controllers 00:13:28.644 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:28.644 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:28.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:28.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:28.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:28.644 Initialization complete. Launching workers. 00:13:28.644 Starting thread on core 2 00:13:28.644 Starting thread on core 3 00:13:28.644 Starting thread on core 1 00:13:28.644 17:16:58 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:28.903 [2024-04-25 17:16:58.622782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:32.193 [2024-04-25 17:17:01.659915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:32.193 Initializing NVMe Controllers 00:13:32.193 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:32.193 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:32.193 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:32.193 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:32.193 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:32.193 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:32.193 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:32.193 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:32.193 Initialization complete. Launching workers. 
00:13:32.193 Starting thread on core 1 with urgent priority queue 00:13:32.193 Starting thread on core 2 with urgent priority queue 00:13:32.193 Starting thread on core 3 with urgent priority queue 00:13:32.193 Starting thread on core 0 with urgent priority queue 00:13:32.193 SPDK bdev Controller (SPDK2 ) core 0: 7930.67 IO/s 12.61 secs/100000 ios 00:13:32.193 SPDK bdev Controller (SPDK2 ) core 1: 8131.33 IO/s 12.30 secs/100000 ios 00:13:32.193 SPDK bdev Controller (SPDK2 ) core 2: 7487.00 IO/s 13.36 secs/100000 ios 00:13:32.193 SPDK bdev Controller (SPDK2 ) core 3: 7890.00 IO/s 12.67 secs/100000 ios 00:13:32.193 ======================================================== 00:13:32.193 00:13:32.193 17:17:01 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:32.193 [2024-04-25 17:17:01.958801] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:32.193 [2024-04-25 17:17:01.970842] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:32.193 Initializing NVMe Controllers 00:13:32.193 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:32.193 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:32.193 Namespace ID: 1 size: 0GB 00:13:32.193 Initialization complete. 00:13:32.193 INFO: using host memory buffer for IO 00:13:32.193 Hello world! 00:13:32.193 17:17:02 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:32.452 [2024-04-25 17:17:02.282407] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:33.829 Initializing NVMe Controllers 00:13:33.829 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:33.829 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:33.829 Initialization complete. Launching workers. 
00:13:33.829 submit (in ns) avg, min, max = 6926.1, 3131.8, 4042294.5 00:13:33.829 complete (in ns) avg, min, max = 26483.7, 1969.1, 7086795.5 00:13:33.829 00:13:33.829 Submit histogram 00:13:33.829 ================ 00:13:33.829 Range in us Cumulative Count 00:13:33.829 3.127 - 3.142: 0.0077% ( 1) 00:13:33.829 3.156 - 3.171: 0.0230% ( 2) 00:13:33.829 3.171 - 3.185: 0.0537% ( 4) 00:13:33.829 3.185 - 3.200: 0.0767% ( 3) 00:13:33.829 3.200 - 3.215: 0.1763% ( 13) 00:13:33.829 3.215 - 3.229: 0.3220% ( 19) 00:13:33.829 3.229 - 3.244: 0.5214% ( 26) 00:13:33.829 3.244 - 3.258: 0.6364% ( 15) 00:13:33.829 3.258 - 3.273: 0.9200% ( 37) 00:13:33.829 3.273 - 3.287: 1.3340% ( 54) 00:13:33.829 3.287 - 3.302: 1.9627% ( 82) 00:13:33.829 3.302 - 3.316: 2.9364% ( 127) 00:13:33.829 3.316 - 3.331: 3.9791% ( 136) 00:13:33.829 3.331 - 3.345: 4.9068% ( 121) 00:13:33.829 3.345 - 3.360: 6.0492% ( 149) 00:13:33.829 3.360 - 3.375: 7.2376% ( 155) 00:13:33.829 3.375 - 3.389: 8.2343% ( 130) 00:13:33.829 3.389 - 3.404: 9.7524% ( 198) 00:13:33.829 3.404 - 3.418: 11.0711% ( 172) 00:13:33.829 3.418 - 3.433: 12.3055% ( 161) 00:13:33.829 3.433 - 3.447: 13.7008% ( 182) 00:13:33.829 3.447 - 3.462: 14.6592% ( 125) 00:13:33.829 3.462 - 3.476: 15.8706% ( 158) 00:13:33.829 3.476 - 3.491: 16.9823% ( 145) 00:13:33.829 3.491 - 3.505: 18.5310% ( 202) 00:13:33.829 3.505 - 3.520: 19.8574% ( 173) 00:13:33.829 3.520 - 3.535: 21.5901% ( 226) 00:13:33.829 3.535 - 3.549: 22.8245% ( 161) 00:13:33.829 3.549 - 3.564: 24.0282% ( 157) 00:13:33.829 3.564 - 3.578: 25.3009% ( 166) 00:13:33.829 3.578 - 3.593: 26.5430% ( 162) 00:13:33.829 3.593 - 3.607: 28.0150% ( 192) 00:13:33.829 3.607 - 3.622: 29.3644% ( 176) 00:13:33.829 3.622 - 3.636: 30.8518% ( 194) 00:13:33.829 3.636 - 3.651: 32.4159% ( 204) 00:13:33.829 3.651 - 3.665: 33.7116% ( 169) 00:13:33.829 3.665 - 3.680: 35.0380% ( 173) 00:13:33.829 3.680 - 3.695: 36.9470% ( 249) 00:13:33.829 3.695 - 3.709: 39.4234% ( 323) 00:13:33.829 3.709 - 3.724: 43.0422% ( 472) 00:13:33.829 3.724 - 3.753: 54.5887% ( 1506) 00:13:33.829 3.753 - 3.782: 67.0245% ( 1622) 00:13:33.829 3.782 - 3.811: 78.3332% ( 1475) 00:13:33.829 3.811 - 3.840: 86.1228% ( 1016) 00:13:33.829 3.840 - 3.869: 90.9913% ( 635) 00:13:33.829 3.869 - 3.898: 92.8544% ( 243) 00:13:33.829 3.898 - 3.927: 93.7208% ( 113) 00:13:33.829 3.927 - 3.956: 94.2345% ( 67) 00:13:33.829 3.956 - 3.985: 94.7098% ( 62) 00:13:33.829 3.985 - 4.015: 94.9781% ( 35) 00:13:33.830 4.015 - 4.044: 95.3768% ( 52) 00:13:33.830 4.044 - 4.073: 95.6835% ( 40) 00:13:33.830 4.073 - 4.102: 95.9289% ( 32) 00:13:33.830 4.102 - 4.131: 96.0745% ( 19) 00:13:33.830 4.131 - 4.160: 96.1665% ( 12) 00:13:33.830 4.160 - 4.189: 96.2585% ( 12) 00:13:33.830 4.189 - 4.218: 96.3045% ( 6) 00:13:33.830 4.218 - 4.247: 96.3122% ( 1) 00:13:33.830 4.247 - 4.276: 96.3275% ( 2) 00:13:33.830 4.276 - 4.305: 96.3429% ( 2) 00:13:33.830 4.305 - 4.335: 96.3659% ( 3) 00:13:33.830 4.335 - 4.364: 96.3735% ( 1) 00:13:33.830 4.364 - 4.393: 96.3965% ( 3) 00:13:33.830 4.393 - 4.422: 96.4195% ( 3) 00:13:33.830 4.422 - 4.451: 96.4809% ( 8) 00:13:33.830 4.451 - 4.480: 96.5575% ( 10) 00:13:33.830 4.480 - 4.509: 96.6035% ( 6) 00:13:33.830 4.509 - 4.538: 96.6495% ( 6) 00:13:33.830 4.538 - 4.567: 96.7109% ( 8) 00:13:33.830 4.567 - 4.596: 96.7799% ( 9) 00:13:33.830 4.596 - 4.625: 96.8642% ( 11) 00:13:33.830 4.625 - 4.655: 96.9869% ( 16) 00:13:33.830 4.655 - 4.684: 97.0866% ( 13) 00:13:33.830 4.684 - 4.713: 97.1709% ( 11) 00:13:33.830 4.713 - 4.742: 97.2706% ( 13) 00:13:33.830 4.742 - 4.771: 97.3396% ( 9) 00:13:33.830 4.771 - 
4.800: 97.4009% ( 8) 00:13:33.830 4.800 - 4.829: 97.4699% ( 9) 00:13:33.830 4.829 - 4.858: 97.5772% ( 14) 00:13:33.830 4.858 - 4.887: 97.6156% ( 5) 00:13:33.830 4.887 - 4.916: 97.6769% ( 8) 00:13:33.830 4.916 - 4.945: 97.7076% ( 4) 00:13:33.830 4.945 - 4.975: 97.7459% ( 5) 00:13:33.830 4.975 - 5.004: 97.7766% ( 4) 00:13:33.830 5.004 - 5.033: 97.8226% ( 6) 00:13:33.830 5.033 - 5.062: 97.8533% ( 4) 00:13:33.830 5.062 - 5.091: 97.8686% ( 2) 00:13:33.830 5.091 - 5.120: 97.8993% ( 4) 00:13:33.830 5.120 - 5.149: 97.9606% ( 8) 00:13:33.830 5.178 - 5.207: 97.9759% ( 2) 00:13:33.830 5.236 - 5.265: 97.9836% ( 1) 00:13:33.830 5.265 - 5.295: 97.9913% ( 1) 00:13:33.830 5.295 - 5.324: 98.0066% ( 2) 00:13:33.830 5.324 - 5.353: 98.0143% ( 1) 00:13:33.830 5.382 - 5.411: 98.0219% ( 1) 00:13:33.830 5.469 - 5.498: 98.0296% ( 1) 00:13:33.830 5.498 - 5.527: 98.0373% ( 1) 00:13:33.830 5.585 - 5.615: 98.0449% ( 1) 00:13:33.830 5.731 - 5.760: 98.0526% ( 1) 00:13:33.830 5.789 - 5.818: 98.0603% ( 1) 00:13:33.830 5.876 - 5.905: 98.0909% ( 4) 00:13:33.830 5.905 - 5.935: 98.0986% ( 1) 00:13:33.830 5.935 - 5.964: 98.1063% ( 1) 00:13:33.830 5.964 - 5.993: 98.1139% ( 1) 00:13:33.830 6.109 - 6.138: 98.1216% ( 1) 00:13:33.830 6.196 - 6.225: 98.1293% ( 1) 00:13:33.830 6.807 - 6.836: 98.1369% ( 1) 00:13:33.830 7.447 - 7.505: 98.1446% ( 1) 00:13:33.830 7.738 - 7.796: 98.1523% ( 1) 00:13:33.830 7.855 - 7.913: 98.1599% ( 1) 00:13:33.830 7.971 - 8.029: 98.1676% ( 1) 00:13:33.830 8.145 - 8.204: 98.1753% ( 1) 00:13:33.830 8.262 - 8.320: 98.1906% ( 2) 00:13:33.830 8.320 - 8.378: 98.1983% ( 1) 00:13:33.830 8.378 - 8.436: 98.2059% ( 1) 00:13:33.830 8.553 - 8.611: 98.2136% ( 1) 00:13:33.830 8.669 - 8.727: 98.2289% ( 2) 00:13:33.830 8.727 - 8.785: 98.2366% ( 1) 00:13:33.830 8.785 - 8.844: 98.2519% ( 2) 00:13:33.830 8.960 - 9.018: 98.2596% ( 1) 00:13:33.830 9.018 - 9.076: 98.2673% ( 1) 00:13:33.830 9.076 - 9.135: 98.2749% ( 1) 00:13:33.830 9.135 - 9.193: 98.2826% ( 1) 00:13:33.830 9.367 - 9.425: 98.2979% ( 2) 00:13:33.830 9.425 - 9.484: 98.3056% ( 1) 00:13:33.830 9.484 - 9.542: 98.3133% ( 1) 00:13:33.830 9.542 - 9.600: 98.3209% ( 1) 00:13:33.830 9.600 - 9.658: 98.3286% ( 1) 00:13:33.830 9.716 - 9.775: 98.3363% ( 1) 00:13:33.830 9.775 - 9.833: 98.3439% ( 1) 00:13:33.830 9.833 - 9.891: 98.3516% ( 1) 00:13:33.830 9.891 - 9.949: 98.3593% ( 1) 00:13:33.830 9.949 - 10.007: 98.3746% ( 2) 00:13:33.830 10.124 - 10.182: 98.3823% ( 1) 00:13:33.830 10.182 - 10.240: 98.3899% ( 1) 00:13:33.830 10.298 - 10.356: 98.3976% ( 1) 00:13:33.830 10.356 - 10.415: 98.4206% ( 3) 00:13:33.830 10.473 - 10.531: 98.4283% ( 1) 00:13:33.830 10.531 - 10.589: 98.4359% ( 1) 00:13:33.830 10.589 - 10.647: 98.4436% ( 1) 00:13:33.830 10.764 - 10.822: 98.4513% ( 1) 00:13:33.830 10.822 - 10.880: 98.4589% ( 1) 00:13:33.830 11.055 - 11.113: 98.4666% ( 1) 00:13:33.830 11.171 - 11.229: 98.4743% ( 1) 00:13:33.830 11.229 - 11.287: 98.4973% ( 3) 00:13:33.830 11.462 - 11.520: 98.5049% ( 1) 00:13:33.830 11.520 - 11.578: 98.5126% ( 1) 00:13:33.830 12.102 - 12.160: 98.5203% ( 1) 00:13:33.830 12.393 - 12.451: 98.5279% ( 1) 00:13:33.830 12.684 - 12.742: 98.5356% ( 1) 00:13:33.830 12.800 - 12.858: 98.5433% ( 1) 00:13:33.830 13.324 - 13.382: 98.5509% ( 1) 00:13:33.830 13.382 - 13.440: 98.5586% ( 1) 00:13:33.830 13.615 - 13.673: 98.5663% ( 1) 00:13:33.830 13.905 - 13.964: 98.5739% ( 1) 00:13:33.830 14.080 - 14.138: 98.5893% ( 2) 00:13:33.830 14.196 - 14.255: 98.5969% ( 1) 00:13:33.830 14.313 - 14.371: 98.6046% ( 1) 00:13:33.830 14.545 - 14.604: 98.6123% ( 1) 00:13:33.830 14.778 - 14.836: 
98.6353% ( 3) 00:13:33.830 14.836 - 14.895: 98.6430% ( 1) 00:13:33.830 14.895 - 15.011: 98.6506% ( 1) 00:13:33.830 16.407 - 16.524: 98.6583% ( 1) 00:13:33.830 16.756 - 16.873: 98.6660% ( 1) 00:13:33.830 17.455 - 17.571: 98.6813% ( 2) 00:13:33.830 17.687 - 17.804: 98.7043% ( 3) 00:13:33.830 17.804 - 17.920: 98.7350% ( 4) 00:13:33.830 17.920 - 18.036: 98.7580% ( 3) 00:13:33.830 18.036 - 18.153: 98.7886% ( 4) 00:13:33.830 18.153 - 18.269: 98.9113% ( 16) 00:13:33.830 18.269 - 18.385: 98.9956% ( 11) 00:13:33.830 18.385 - 18.502: 99.1030% ( 14) 00:13:33.830 18.502 - 18.618: 99.1873% ( 11) 00:13:33.830 18.618 - 18.735: 99.2410% ( 7) 00:13:33.830 18.735 - 18.851: 99.2870% ( 6) 00:13:33.830 18.851 - 18.967: 99.4173% ( 17) 00:13:33.830 18.967 - 19.084: 99.4403% ( 3) 00:13:33.830 19.084 - 19.200: 99.5477% ( 14) 00:13:33.830 19.200 - 19.316: 99.5783% ( 4) 00:13:33.830 19.316 - 19.433: 99.6243% ( 6) 00:13:33.830 19.433 - 19.549: 99.6627% ( 5) 00:13:33.830 19.549 - 19.665: 99.6857% ( 3) 00:13:33.830 19.665 - 19.782: 99.7163% ( 4) 00:13:33.830 19.782 - 19.898: 99.7777% ( 8) 00:13:33.830 19.898 - 20.015: 99.8083% ( 4) 00:13:33.830 20.015 - 20.131: 99.8313% ( 3) 00:13:33.830 20.131 - 20.247: 99.8543% ( 3) 00:13:33.830 20.247 - 20.364: 99.8773% ( 3) 00:13:33.830 20.364 - 20.480: 99.8850% ( 1) 00:13:33.830 20.480 - 20.596: 99.8927% ( 1) 00:13:33.830 20.596 - 20.713: 99.9080% ( 2) 00:13:33.830 23.156 - 23.273: 99.9157% ( 1) 00:13:33.830 23.622 - 23.738: 99.9233% ( 1) 00:13:33.830 3038.487 - 3053.382: 99.9310% ( 1) 00:13:33.830 3932.160 - 3961.949: 99.9387% ( 1) 00:13:33.830 3961.949 - 3991.738: 99.9540% ( 2) 00:13:33.830 3991.738 - 4021.527: 99.9770% ( 3) 00:13:33.830 4021.527 - 4051.316: 100.0000% ( 3) 00:13:33.830 00:13:33.830 Complete histogram 00:13:33.830 ================== 00:13:33.830 Range in us Cumulative Count 00:13:33.830 1.964 - 1.978: 0.1687% ( 22) 00:13:33.830 1.978 - 1.993: 1.0197% ( 111) 00:13:33.830 1.993 - 2.007: 1.1270% ( 14) 00:13:33.830 2.007 - 2.022: 1.1424% ( 2) 00:13:33.830 2.022 - 2.036: 1.4491% ( 40) 00:13:33.830 2.036 - 2.051: 7.4829% ( 787) 00:13:33.830 2.051 - 2.065: 9.1850% ( 222) 00:13:33.830 2.065 - 2.080: 9.4610% ( 36) 00:13:33.830 2.080 - 2.095: 9.6910% ( 30) 00:13:33.830 2.095 - 2.109: 12.3898% ( 352) 00:13:33.830 2.109 - 2.124: 17.1893% ( 626) 00:13:33.830 2.124 - 2.138: 17.6110% ( 55) 00:13:33.830 2.138 - 2.153: 17.8027% ( 25) 00:13:33.830 2.153 - 2.167: 18.0710% ( 35) 00:13:33.830 2.167 - 2.182: 21.3678% ( 430) 00:13:33.830 2.182 - 2.196: 27.1410% ( 753) 00:13:33.830 2.196 - 2.211: 27.8387% ( 91) 00:13:33.830 2.211 - 2.225: 28.0380% ( 26) 00:13:33.830 2.225 - 2.240: 28.2297% ( 25) 00:13:33.830 2.240 - 2.255: 30.7061% ( 323) 00:13:33.830 2.255 - 2.269: 37.2920% ( 859) 00:13:33.830 2.269 - 2.284: 38.6414% ( 176) 00:13:33.830 2.284 - 2.298: 38.8331% ( 25) 00:13:33.830 2.298 - 2.313: 39.1014% ( 35) 00:13:33.830 2.313 - 2.327: 40.0061% ( 118) 00:13:33.830 2.327 - 2.342: 60.0399% ( 2613) 00:13:33.830 2.342 - 2.356: 87.9399% ( 3639) 00:13:33.831 2.356 - 2.371: 91.4974% ( 464) 00:13:33.831 2.371 - 2.385: 92.2180% ( 94) 00:13:33.831 2.385 - 2.400: 93.6594% ( 188) 00:13:33.831 2.400 - 2.415: 95.2848% ( 212) 00:13:33.831 2.415 - 2.429: 96.2279% ( 123) 00:13:33.831 2.429 - 2.444: 96.5345% ( 40) 00:13:33.831 2.444 - 2.458: 96.6572% ( 16) 00:13:33.831 2.458 - 2.473: 96.7185% ( 8) 00:13:33.831 2.473 - 2.487: 96.7645% ( 6) 00:13:33.831 2.487 - 2.502: 96.8105% ( 6) 00:13:33.831 2.502 - 2.516: 96.8566% ( 6) 00:13:33.831 2.516 - 2.531: 96.8719% ( 2) 00:13:33.831 2.531 - 2.545: 96.8949% ( 3) 
00:13:33.831 2.545 - 2.560: 96.9026% ( 1) 00:13:33.831 2.560 - 2.575: 96.9102% ( 1) 00:13:33.831 2.575 - 2.589: 96.9332% ( 3) 00:13:33.831 2.604 - 2.618: 96.9409% ( 1) 00:13:33.831 2.618 - 2.633: 96.9562% ( 2) 00:13:33.831 2.633 - 2.647: 96.9716% ( 2) 00:13:33.831 2.662 - 2.676: 96.9946% ( 3) 00:13:33.831 2.676 - 2.691: 97.0176% ( 3) 00:13:33.831 2.691 - 2.705: 97.0252% ( 1) 00:13:33.831 2.705 - 2.720: 97.0406% ( 2) 00:13:33.831 2.720 - 2.735: 97.0712% ( 4) 00:13:33.831 2.735 - 2.749: 97.0789% ( 1) 00:13:33.831 2.749 - 2.764: 97.1019% ( 3) 00:13:33.831 2.764 - 2.778: 97.1249% ( 3) 00:13:33.831 2.778 - 2.793: 97.1326% ( 1) 00:13:33.831 2.822 - 2.836: 97.1479% ( 2) 00:13:33.831 2.836 - 2.851: 97.1862% ( 5) 00:13:33.831 2.851 - 2.865: 97.2092% ( 3) 00:13:33.831 2.865 - 2.880: 97.2476% ( 5) 00:13:33.831 2.880 - 2.895: 97.2782% ( 4) 00:13:33.831 2.895 - 2.909: 97.2859% ( 1) 00:13:33.831 2.909 - 2.924: 97.3626% ( 10) 00:13:33.831 2.924 - 2.938: 97.4316% ( 9) 00:13:33.831 2.938 - 2.953: 97.4852% ( 7) 00:13:33.831 2.953 - 2.967: 97.5542% ( 9) 00:13:33.831 2.967 - 2.982: 97.5849% ( 4) 00:13:33.831 2.982 - 2.996: 97.6309% ( 6) 00:13:33.831 2.996 - 3.011: 97.6769% ( 6) 00:13:33.831 3.011 - 3.025: 97.7152% ( 5) 00:13:33.831 3.025 - 3.040: 97.7613% ( 6) 00:13:33.831 3.040 - 3.055: 97.8149% ( 7) 00:13:33.831 3.055 - 3.069: 97.8226% ( 1) 00:13:33.831 3.069 - 3.084: 97.8303% ( 1) 00:13:33.831 3.084 - 3.098: 97.8379% ( 1) 00:13:33.831 3.098 - 3.113: 97.8533% ( 2) 00:13:33.831 3.113 - 3.127: 97.8916% ( 5) 00:13:33.831 3.156 - 3.171: 97.9146% ( 3) 00:13:33.831 3.171 - 3.185: 97.9299% ( 2) 00:13:33.831 3.185 - 3.200: 97.9376% ( 1) 00:13:33.831 3.200 - 3.215: 97.9453% ( 1) 00:13:33.831 3.229 - 3.244: 97.9529% ( 1) 00:13:33.831 3.273 - 3.287: 97.9606% ( 1) 00:13:33.831 3.302 - 3.316: 97.9759% ( 2) 00:13:33.831 3.331 - 3.345: 97.9836% ( 1) 00:13:33.831 3.345 - 3.360: 97.9913% ( 1) 00:13:33.831 3.447 - 3.462: 97.9989% ( 1) 00:13:33.831 3.811 - 3.840: 98.0066% ( 1) 00:13:33.831 3.869 - 3.898: 98.0143% ( 1) 00:13:33.831 3.898 - 3.927: 98.0219% ( 1) 00:13:33.831 4.073 - 4.102: 98.0296% ( 1) 00:13:33.831 4.102 - 4.131: 98.0373% ( 1) 00:13:33.831 4.131 - 4.160: 98.0449% ( 1) 00:13:33.831 4.189 - 4.218: 98.0679% ( 3) 00:13:33.831 4.218 - 4.247: 98.0756% ( 1) 00:13:33.831 4.247 - 4.276: 98.0833% ( 1) 00:13:33.831 4.276 - 4.305: 98.0909% ( 1) 00:13:33.831 4.305 - 4.335: 98.0986% ( 1) 00:13:33.831 4.335 - 4.364: 98.1446% ( 6) 00:13:33.831 4.393 - 4.422: 98.1599% ( 2) 00:13:33.831 4.422 - 4.451: 98.1753% ( 2) 00:13:33.831 4.451 - 4.480: 98.1829% ( 1) 00:13:33.831 4.480 - 4.509: 98.1983% ( 2) 00:13:33.831 4.509 - 4.538: 98.2059% ( 1) 00:13:33.831 4.538 - 4.567: 98.2136% ( 1) 00:13:33.831 4.596 - 4.625: 98.2213% ( 1) 00:13:33.831 4.684 - 4.713: 98.2289% ( 1) 00:13:33.831 4.742 - 4.771: 98.2366% ( 1) 00:13:33.831 4.771 - 4.800: 98.2443% ( 1) 00:13:33.831 4.800 - 4.829: 98.2519% ( 1) 00:13:33.831 4.829 - 4.858: 98.2596% ( 1) 00:13:33.831 4.887 - 4.916: 98.2673% ( 1) 00:13:33.831 5.295 - 5.324: 98.2749% ( 1) 00:13:33.831 5.527 - 5.556: 98.2826% ( 1) 00:13:33.831 6.022 - 6.051: 98.2903% ( 1) 00:13:33.831 6.807 - 6.836: 98.2979% ( 1) 00:13:33.831 7.913 - 7.971: 98.3056% ( 1) 00:13:33.831 7.971 - 8.029: 98.3209% ( 2) 00:13:33.831 8.145 - 8.204: 98.3286% ( 1) 00:13:33.831 8.320 - 8.378: 98.3363% ( 1) 00:13:33.831 8.378 - 8.436: 98.3516% ( 2) 00:13:33.831 8.785 - 8.844: 98.3669% ( 2) 00:13:33.831 8.844 - 8.902: 98.3746% ( 1) 00:13:33.831 8.902 - 8.960: 98.3823% ( 1) 00:13:33.831 8.960 - 9.018: 98.3976% ( 2) 00:13:33.831 9.076 - 
9.135: 98.4206% ( 3) 00:13:33.831 9.193 - 9.251: 98.4283% ( 1) 00:13:33.831 9.309 - 9.367: 98.4359% ( 1) 00:13:33.831 9.425 - 9.484: 98.4436% ( 1) 00:13:33.831 9.542 - 9.600: 98.4513% ( 1) 00:13:33.831 9.833 - 9.891: 98.4589% ( 1) 00:13:33.831 9.891 - 9.949: 98.4666% ( 1) 00:13:33.831 10.065 - 10.124: 98.4743% ( 1) 00:13:33.831 11.578 - 11.636: 98.4819% ( 1) 00:13:33.831 12.625 - 12.684: 98.4896% ( 1) 00:13:33.831 12.684 - 12.742: 98.4973% ( 1) 00:13:33.831 12.800 - 12.858: 98.5049% ( 1) 00:13:33.831 13.149 - 13.207: 98.5126% ( 1) 00:13:33.831 13.265 - 13.324: 98.5203% ( 1) 00:13:33.831 13.382 - 13.440: 98.5433% ( 3) 00:13:33.831 13.440 - 13.498: 98.5509% ( 1) 00:13:33.831 13.498 - 13.556: 98.5663%[2024-04-25 17:17:03.379157] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:33.831 ( 2) 00:13:33.831 13.731 - 13.789: 98.5816% ( 2) 00:13:33.831 13.789 - 13.847: 98.5893% ( 1) 00:13:33.831 14.545 - 14.604: 98.5969% ( 1) 00:13:33.831 15.127 - 15.244: 98.6046% ( 1) 00:13:33.831 16.175 - 16.291: 98.6123% ( 1) 00:13:33.831 16.407 - 16.524: 98.6276% ( 2) 00:13:33.831 16.524 - 16.640: 98.6583% ( 4) 00:13:33.831 16.640 - 16.756: 98.7196% ( 8) 00:13:33.831 16.756 - 16.873: 98.7886% ( 9) 00:13:33.831 16.873 - 16.989: 98.8270% ( 5) 00:13:33.831 16.989 - 17.105: 98.8576% ( 4) 00:13:33.831 17.105 - 17.222: 98.9266% ( 9) 00:13:33.831 17.222 - 17.338: 98.9726% ( 6) 00:13:33.831 17.338 - 17.455: 98.9803% ( 1) 00:13:33.831 17.455 - 17.571: 98.9956% ( 2) 00:13:33.831 17.571 - 17.687: 99.0340% ( 5) 00:13:33.831 17.687 - 17.804: 99.0800% ( 6) 00:13:33.831 17.804 - 17.920: 99.1183% ( 5) 00:13:33.831 17.920 - 18.036: 99.1720% ( 7) 00:13:33.831 18.036 - 18.153: 99.2026% ( 4) 00:13:33.831 18.153 - 18.269: 99.2333% ( 4) 00:13:33.831 18.269 - 18.385: 99.2793% ( 6) 00:13:33.831 18.385 - 18.502: 99.3176% ( 5) 00:13:33.831 18.502 - 18.618: 99.3483% ( 4) 00:13:33.831 18.735 - 18.851: 99.3560% ( 1) 00:13:33.831 18.851 - 18.967: 99.3636% ( 1) 00:13:33.831 18.967 - 19.084: 99.3713% ( 1) 00:13:33.831 19.200 - 19.316: 99.3790% ( 1) 00:13:33.831 19.549 - 19.665: 99.3943% ( 2) 00:13:33.831 19.665 - 19.782: 99.4020% ( 1) 00:13:33.831 19.782 - 19.898: 99.4096% ( 1) 00:13:33.831 3023.593 - 3038.487: 99.4403% ( 4) 00:13:33.831 3038.487 - 3053.382: 99.4633% ( 3) 00:13:33.831 3053.382 - 3068.276: 99.4786% ( 2) 00:13:33.831 3872.582 - 3902.371: 99.4863% ( 1) 00:13:33.831 3902.371 - 3932.160: 99.5016% ( 2) 00:13:33.831 3932.160 - 3961.949: 99.5246% ( 3) 00:13:33.831 3961.949 - 3991.738: 99.6090% ( 11) 00:13:33.831 3991.738 - 4021.527: 99.8313% ( 29) 00:13:33.831 4021.527 - 4051.316: 99.9157% ( 11) 00:13:33.831 4051.316 - 4081.105: 99.9617% ( 6) 00:13:33.831 6047.185 - 6076.975: 99.9693% ( 1) 00:13:33.831 6970.647 - 7000.436: 99.9770% ( 1) 00:13:33.831 7000.436 - 7030.225: 99.9847% ( 1) 00:13:33.831 7060.015 - 7089.804: 100.0000% ( 2) 00:13:33.831 00:13:33.831 17:17:03 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:33.831 17:17:03 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:33.831 17:17:03 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:33.831 17:17:03 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:33.831 17:17:03 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:33.831 [ 00:13:33.831 { 00:13:33.831 "allow_any_host": true, 00:13:33.831 "hosts": 
[], 00:13:33.831 "listen_addresses": [], 00:13:33.831 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.831 "subtype": "Discovery" 00:13:33.831 }, 00:13:33.831 { 00:13:33.831 "allow_any_host": true, 00:13:33.831 "hosts": [], 00:13:33.831 "listen_addresses": [ 00:13:33.831 { 00:13:33.831 "adrfam": "IPv4", 00:13:33.831 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:33.831 "transport": "VFIOUSER", 00:13:33.831 "trsvcid": "0", 00:13:33.831 "trtype": "VFIOUSER" 00:13:33.831 } 00:13:33.831 ], 00:13:33.831 "max_cntlid": 65519, 00:13:33.831 "max_namespaces": 32, 00:13:33.831 "min_cntlid": 1, 00:13:33.831 "model_number": "SPDK bdev Controller", 00:13:33.832 "namespaces": [ 00:13:33.832 { 00:13:33.832 "bdev_name": "Malloc1", 00:13:33.832 "name": "Malloc1", 00:13:33.832 "nguid": "EBC17231AB2C4D3D86F978C3D2A1D4A4", 00:13:33.832 "nsid": 1, 00:13:33.832 "uuid": "ebc17231-ab2c-4d3d-86f9-78c3d2a1d4a4" 00:13:33.832 }, 00:13:33.832 { 00:13:33.832 "bdev_name": "Malloc3", 00:13:33.832 "name": "Malloc3", 00:13:33.832 "nguid": "6483EA0687C84E3D84A5F39B39461DBB", 00:13:33.832 "nsid": 2, 00:13:33.832 "uuid": "6483ea06-87c8-4e3d-84a5-f39b39461dbb" 00:13:33.832 } 00:13:33.832 ], 00:13:33.832 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:33.832 "serial_number": "SPDK1", 00:13:33.832 "subtype": "NVMe" 00:13:33.832 }, 00:13:33.832 { 00:13:33.832 "allow_any_host": true, 00:13:33.832 "hosts": [], 00:13:33.832 "listen_addresses": [ 00:13:33.832 { 00:13:33.832 "adrfam": "IPv4", 00:13:33.832 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:33.832 "transport": "VFIOUSER", 00:13:33.832 "trsvcid": "0", 00:13:33.832 "trtype": "VFIOUSER" 00:13:33.832 } 00:13:33.832 ], 00:13:33.832 "max_cntlid": 65519, 00:13:33.832 "max_namespaces": 32, 00:13:33.832 "min_cntlid": 1, 00:13:33.832 "model_number": "SPDK bdev Controller", 00:13:33.832 "namespaces": [ 00:13:33.832 { 00:13:33.832 "bdev_name": "Malloc2", 00:13:33.832 "name": "Malloc2", 00:13:33.832 "nguid": "01C3005E4A6B4BCABF830084165FE3AC", 00:13:33.832 "nsid": 1, 00:13:33.832 "uuid": "01c3005e-4a6b-4bca-bf83-0084165fe3ac" 00:13:33.832 } 00:13:33.832 ], 00:13:33.832 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:33.832 "serial_number": "SPDK2", 00:13:33.832 "subtype": "NVMe" 00:13:33.832 } 00:13:33.832 ] 00:13:34.121 17:17:03 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:34.121 17:17:03 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:34.121 17:17:03 -- target/nvmf_vfio_user.sh@34 -- # aerpid=75378 00:13:34.121 17:17:03 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:34.121 17:17:03 -- common/autotest_common.sh@1251 -- # local i=0 00:13:34.121 17:17:03 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:34.121 17:17:03 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:13:34.121 17:17:03 -- common/autotest_common.sh@1254 -- # i=1 00:13:34.121 17:17:03 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:13:34.121 17:17:03 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:34.121 17:17:03 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:13:34.121 17:17:03 -- common/autotest_common.sh@1254 -- # i=2 00:13:34.121 17:17:03 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:13:34.121 17:17:04 -- common/autotest_common.sh@1252 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:34.121 17:17:04 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:13:34.121 17:17:04 -- common/autotest_common.sh@1254 -- # i=3 00:13:34.121 17:17:04 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:13:34.121 [2024-04-25 17:17:04.026151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:34.379 17:17:04 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:34.379 17:17:04 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:34.379 17:17:04 -- common/autotest_common.sh@1262 -- # return 0 00:13:34.379 17:17:04 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:34.379 17:17:04 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:34.637 Malloc4 00:13:34.638 17:17:04 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:34.896 [2024-04-25 17:17:04.659682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:34.896 17:17:04 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:34.896 Asynchronous Event Request test 00:13:34.896 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:34.896 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:34.896 Registering asynchronous event callbacks... 00:13:34.896 Starting namespace attribute notice tests for all controllers... 00:13:34.896 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:34.896 aer_cb - Changed Namespace 00:13:34.896 Cleaning up... 
00:13:35.155 [ 00:13:35.155 { 00:13:35.155 "allow_any_host": true, 00:13:35.155 "hosts": [], 00:13:35.155 "listen_addresses": [], 00:13:35.155 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:35.155 "subtype": "Discovery" 00:13:35.155 }, 00:13:35.155 { 00:13:35.155 "allow_any_host": true, 00:13:35.155 "hosts": [], 00:13:35.155 "listen_addresses": [ 00:13:35.155 { 00:13:35.155 "adrfam": "IPv4", 00:13:35.155 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:35.155 "transport": "VFIOUSER", 00:13:35.155 "trsvcid": "0", 00:13:35.155 "trtype": "VFIOUSER" 00:13:35.155 } 00:13:35.155 ], 00:13:35.155 "max_cntlid": 65519, 00:13:35.155 "max_namespaces": 32, 00:13:35.155 "min_cntlid": 1, 00:13:35.155 "model_number": "SPDK bdev Controller", 00:13:35.155 "namespaces": [ 00:13:35.155 { 00:13:35.155 "bdev_name": "Malloc1", 00:13:35.155 "name": "Malloc1", 00:13:35.155 "nguid": "EBC17231AB2C4D3D86F978C3D2A1D4A4", 00:13:35.155 "nsid": 1, 00:13:35.155 "uuid": "ebc17231-ab2c-4d3d-86f9-78c3d2a1d4a4" 00:13:35.155 }, 00:13:35.155 { 00:13:35.155 "bdev_name": "Malloc3", 00:13:35.155 "name": "Malloc3", 00:13:35.155 "nguid": "6483EA0687C84E3D84A5F39B39461DBB", 00:13:35.155 "nsid": 2, 00:13:35.155 "uuid": "6483ea06-87c8-4e3d-84a5-f39b39461dbb" 00:13:35.155 } 00:13:35.155 ], 00:13:35.155 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:35.155 "serial_number": "SPDK1", 00:13:35.155 "subtype": "NVMe" 00:13:35.155 }, 00:13:35.155 { 00:13:35.155 "allow_any_host": true, 00:13:35.155 "hosts": [], 00:13:35.155 "listen_addresses": [ 00:13:35.155 { 00:13:35.155 "adrfam": "IPv4", 00:13:35.155 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:35.155 "transport": "VFIOUSER", 00:13:35.155 "trsvcid": "0", 00:13:35.155 "trtype": "VFIOUSER" 00:13:35.155 } 00:13:35.155 ], 00:13:35.155 "max_cntlid": 65519, 00:13:35.155 "max_namespaces": 32, 00:13:35.155 "min_cntlid": 1, 00:13:35.155 "model_number": "SPDK bdev Controller", 00:13:35.155 "namespaces": [ 00:13:35.155 { 00:13:35.155 "bdev_name": "Malloc2", 00:13:35.155 "name": "Malloc2", 00:13:35.155 "nguid": "01C3005E4A6B4BCABF830084165FE3AC", 00:13:35.155 "nsid": 1, 00:13:35.155 "uuid": "01c3005e-4a6b-4bca-bf83-0084165fe3ac" 00:13:35.155 }, 00:13:35.155 { 00:13:35.155 "bdev_name": "Malloc4", 00:13:35.155 "name": "Malloc4", 00:13:35.155 "nguid": "18B7812285C64B6F9CB9C0CDA66F8369", 00:13:35.155 "nsid": 2, 00:13:35.155 "uuid": "18b78122-85c6-4b6f-9cb9-c0cda66f8369" 00:13:35.155 } 00:13:35.155 ], 00:13:35.155 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:35.155 "serial_number": "SPDK2", 00:13:35.155 "subtype": "NVMe" 00:13:35.155 } 00:13:35.155 ] 00:13:35.155 17:17:04 -- target/nvmf_vfio_user.sh@44 -- # wait 75378 00:13:35.155 17:17:04 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:35.155 17:17:04 -- target/nvmf_vfio_user.sh@95 -- # killprocess 74697 00:13:35.155 17:17:04 -- common/autotest_common.sh@936 -- # '[' -z 74697 ']' 00:13:35.155 17:17:04 -- common/autotest_common.sh@940 -- # kill -0 74697 00:13:35.155 17:17:04 -- common/autotest_common.sh@941 -- # uname 00:13:35.155 17:17:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:35.155 17:17:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74697 00:13:35.155 killing process with pid 74697 00:13:35.155 17:17:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:35.155 17:17:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:35.155 17:17:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74697' 00:13:35.155 17:17:04 -- 
common/autotest_common.sh@955 -- # kill 74697 00:13:35.155 [2024-04-25 17:17:04.982135] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:35.155 17:17:04 -- common/autotest_common.sh@960 -- # wait 74697 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=75421 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:35.414 Process pid: 75421 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 75421' 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:35.414 17:17:05 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 75421 00:13:35.414 17:17:05 -- common/autotest_common.sh@817 -- # '[' -z 75421 ']' 00:13:35.414 17:17:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.414 17:17:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:35.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.414 17:17:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.414 17:17:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:35.414 17:17:05 -- common/autotest_common.sh@10 -- # set +x 00:13:35.414 [2024-04-25 17:17:05.289207] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:35.414 [2024-04-25 17:17:05.290402] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:35.414 [2024-04-25 17:17:05.290491] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.673 [2024-04-25 17:17:05.425125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.673 [2024-04-25 17:17:05.487976] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.673 [2024-04-25 17:17:05.488063] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.673 [2024-04-25 17:17:05.488074] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.673 [2024-04-25 17:17:05.488083] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.673 [2024-04-25 17:17:05.488090] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:35.673 [2024-04-25 17:17:05.488242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.673 [2024-04-25 17:17:05.488386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.673 [2024-04-25 17:17:05.489343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.673 [2024-04-25 17:17:05.489430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.673 [2024-04-25 17:17:05.553961] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:35.673 [2024-04-25 17:17:05.554181] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:35.673 [2024-04-25 17:17:05.554844] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:35.673 [2024-04-25 17:17:05.554871] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:35.673 [2024-04-25 17:17:05.554997] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:13:35.673 17:17:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:35.673 17:17:05 -- common/autotest_common.sh@850 -- # return 0 00:13:35.673 17:17:05 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:37.047 17:17:06 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:37.047 17:17:06 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:37.047 17:17:06 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:37.047 17:17:06 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:37.047 17:17:06 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:37.047 17:17:06 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:37.304 Malloc1 00:13:37.304 17:17:07 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:37.563 17:17:07 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:37.820 17:17:07 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:38.079 17:17:07 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:38.079 17:17:07 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:38.079 17:17:07 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:38.337 Malloc2 00:13:38.337 17:17:08 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:38.596 17:17:08 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:38.855 17:17:08 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:39.113 
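For quick reference, the interrupt-mode bring-up traced above reduces to one rpc.py sequence per vfio-user controller. The lines below are a condensed sketch assembled from this run's xtrace (first controller shown; the same steps repeat for cnode2/Malloc2 under /var/run/vfio-user/domain/vfio-user2/2):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # transport is created once, with the interrupt-mode arguments used in this run
    $RPC nvmf_create_transport -t VFIOUSER -M -I
    # per controller: socket directory, backing malloc bdev, subsystem, namespace, vfio-user listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

stop_nvmf_vfio_user below then kills the target and removes /var/run/vfio-user.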
17:17:08 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:39.113 17:17:08 -- target/nvmf_vfio_user.sh@95 -- # killprocess 75421 00:13:39.113 17:17:08 -- common/autotest_common.sh@936 -- # '[' -z 75421 ']' 00:13:39.113 17:17:08 -- common/autotest_common.sh@940 -- # kill -0 75421 00:13:39.113 17:17:08 -- common/autotest_common.sh@941 -- # uname 00:13:39.113 17:17:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.113 17:17:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75421 00:13:39.113 killing process with pid 75421 00:13:39.113 17:17:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:39.113 17:17:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:39.113 17:17:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75421' 00:13:39.113 17:17:08 -- common/autotest_common.sh@955 -- # kill 75421 00:13:39.113 17:17:08 -- common/autotest_common.sh@960 -- # wait 75421 00:13:39.372 17:17:09 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:39.372 17:17:09 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:39.372 ************************************ 00:13:39.372 END TEST nvmf_vfio_user 00:13:39.372 ************************************ 00:13:39.372 00:13:39.372 real 0m54.235s 00:13:39.372 user 3m34.255s 00:13:39.372 sys 0m3.707s 00:13:39.372 17:17:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.372 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.372 17:17:09 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:39.372 17:17:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:39.372 17:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.372 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.372 ************************************ 00:13:39.372 START TEST nvmf_vfio_user_nvme_compliance 00:13:39.372 ************************************ 00:13:39.372 17:17:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:39.649 * Looking for test storage... 
00:13:39.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:13:39.649 17:17:09 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.649 17:17:09 -- nvmf/common.sh@7 -- # uname -s 00:13:39.649 17:17:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.649 17:17:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.649 17:17:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.649 17:17:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.649 17:17:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.649 17:17:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.649 17:17:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.649 17:17:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.649 17:17:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.649 17:17:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.649 17:17:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:39.649 17:17:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:39.649 17:17:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.649 17:17:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.649 17:17:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.649 17:17:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.649 17:17:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.649 17:17:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.649 17:17:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.649 17:17:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.649 17:17:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.649 17:17:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.649 17:17:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.649 17:17:09 -- paths/export.sh@5 -- # export PATH 00:13:39.649 17:17:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.649 17:17:09 -- nvmf/common.sh@47 -- # : 0 00:13:39.649 17:17:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.649 17:17:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.649 17:17:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.649 17:17:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.649 17:17:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.649 17:17:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.649 17:17:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.649 17:17:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.649 17:17:09 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.649 17:17:09 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.649 17:17:09 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:39.649 17:17:09 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:39.649 17:17:09 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:39.649 Process pid: 75611 00:13:39.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.649 17:17:09 -- compliance/compliance.sh@20 -- # nvmfpid=75611 00:13:39.649 17:17:09 -- compliance/compliance.sh@21 -- # echo 'Process pid: 75611' 00:13:39.649 17:17:09 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:39.649 17:17:09 -- compliance/compliance.sh@24 -- # waitforlisten 75611 00:13:39.649 17:17:09 -- common/autotest_common.sh@817 -- # '[' -z 75611 ']' 00:13:39.649 17:17:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.649 17:17:09 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:39.649 17:17:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:39.649 17:17:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.649 17:17:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:39.649 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.649 [2024-04-25 17:17:09.466362] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:13:39.650 [2024-04-25 17:17:09.466431] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.650 [2024-04-25 17:17:09.598962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:39.908 [2024-04-25 17:17:09.654758] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.908 [2024-04-25 17:17:09.655043] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.908 [2024-04-25 17:17:09.655200] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.908 [2024-04-25 17:17:09.655342] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.908 [2024-04-25 17:17:09.655388] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.908 [2024-04-25 17:17:09.655588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.908 [2024-04-25 17:17:09.655830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.908 [2024-04-25 17:17:09.655838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.908 17:17:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:39.908 17:17:09 -- common/autotest_common.sh@850 -- # return 0 00:13:39.908 17:17:09 -- compliance/compliance.sh@26 -- # sleep 1 00:13:40.844 17:17:10 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:40.844 17:17:10 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:40.844 17:17:10 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:40.844 17:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.844 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:13:40.844 17:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.844 17:17:10 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:40.844 17:17:10 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:40.844 17:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.844 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:13:40.844 malloc0 00:13:40.844 17:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.844 17:17:10 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:40.844 17:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.844 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:13:40.844 17:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.844 17:17:10 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:40.844 17:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.844 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:13:40.844 17:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.844 17:17:10 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:40.844 17:17:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.844 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:13:41.103 17:17:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.103 
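Condensed from the xtrace above: before launching the compliance binary, the suite stands up a single vfio-user controller backed by a 64 MB malloc bdev and capped at 32 namespaces (-m 32). The following is a sketch of the equivalent calls (rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, shown here as $RPC):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    # 64 MB malloc bdev with 512-byte blocks, exposed as namespace 1 of cnode0
    $RPC bdev_malloc_create 64 512 -b malloc0
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance run that follows connects to this socket with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'.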
17:17:10 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:41.103 00:13:41.103 00:13:41.103 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.103 http://cunit.sourceforge.net/ 00:13:41.103 00:13:41.103 00:13:41.103 Suite: nvme_compliance 00:13:41.103 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-25 17:17:11.029245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.103 [2024-04-25 17:17:11.030702] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:41.103 [2024-04-25 17:17:11.030869] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:41.103 [2024-04-25 17:17:11.030894] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:41.103 [2024-04-25 17:17:11.032262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.103 passed 00:13:41.362 Test: admin_identify_ctrlr_verify_fused ...[2024-04-25 17:17:11.119754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.362 [2024-04-25 17:17:11.122771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.362 passed 00:13:41.362 Test: admin_identify_ns ...[2024-04-25 17:17:11.215167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.362 [2024-04-25 17:17:11.277853] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:41.362 [2024-04-25 17:17:11.285798] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:41.362 [2024-04-25 17:17:11.306932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.621 passed 00:13:41.621 Test: admin_get_features_mandatory_features ...[2024-04-25 17:17:11.397590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.621 [2024-04-25 17:17:11.400604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.621 passed 00:13:41.621 Test: admin_get_features_optional_features ...[2024-04-25 17:17:11.490045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.621 [2024-04-25 17:17:11.493057] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.621 passed 00:13:41.621 Test: admin_set_features_number_of_queues ...[2024-04-25 17:17:11.577653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.880 [2024-04-25 17:17:11.678929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.880 passed 00:13:41.880 Test: admin_get_log_page_mandatory_logs ...[2024-04-25 17:17:11.765258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.880 [2024-04-25 17:17:11.768285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.880 passed 00:13:41.880 Test: admin_get_log_page_with_lpo ...[2024-04-25 17:17:11.857516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.138 [2024-04-25 17:17:11.924788] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:42.138 [2024-04-25 17:17:11.940888] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.138 passed 00:13:42.138 Test: fabric_property_get ...[2024-04-25 17:17:12.026952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.138 [2024-04-25 17:17:12.028335] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:42.138 [2024-04-25 17:17:12.029968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.138 passed 00:13:42.397 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-25 17:17:12.117511] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.397 [2024-04-25 17:17:12.118873] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:42.397 [2024-04-25 17:17:12.120553] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.397 passed 00:13:42.397 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-25 17:17:12.210818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.397 [2024-04-25 17:17:12.292815] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:42.397 [2024-04-25 17:17:12.307801] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:42.397 [2024-04-25 17:17:12.313091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.397 passed 00:13:42.656 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-25 17:17:12.403765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.656 [2024-04-25 17:17:12.405101] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:42.656 [2024-04-25 17:17:12.406791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.656 passed 00:13:42.656 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-25 17:17:12.498379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.656 [2024-04-25 17:17:12.573820] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:42.656 [2024-04-25 17:17:12.597828] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:42.656 [2024-04-25 17:17:12.603094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.914 passed 00:13:42.915 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-25 17:17:12.691264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.915 [2024-04-25 17:17:12.692653] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:42.915 [2024-04-25 17:17:12.692933] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:42.915 [2024-04-25 17:17:12.694288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.915 passed 00:13:42.915 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-25 17:17:12.784384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.915 [2024-04-25 17:17:12.871799] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:42.915 [2024-04-25 17:17:12.879845] vfio_user.c:2240:handle_create_io_q: *ERROR*: 
/var/run/vfio-user: invalid I/O queue size 257 00:13:42.915 [2024-04-25 17:17:12.887881] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:43.174 [2024-04-25 17:17:12.895774] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:43.174 [2024-04-25 17:17:12.922941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.174 passed 00:13:43.174 Test: admin_create_io_sq_verify_pc ...[2024-04-25 17:17:13.011090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.174 [2024-04-25 17:17:13.026831] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:43.174 [2024-04-25 17:17:13.043431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.174 passed 00:13:43.174 Test: admin_create_io_qp_max_qps ...[2024-04-25 17:17:13.128851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.561 [2024-04-25 17:17:14.225795] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:44.820 [2024-04-25 17:17:14.613422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:44.820 passed 00:13:44.820 Test: admin_create_io_sq_shared_cq ...[2024-04-25 17:17:14.701008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.080 [2024-04-25 17:17:14.841805] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:45.080 [2024-04-25 17:17:14.877889] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.080 passed 00:13:45.080 00:13:45.080 Run Summary: Type Total Ran Passed Failed Inactive 00:13:45.080 suites 1 1 n/a 0 0 00:13:45.080 tests 18 18 18 0 0 00:13:45.080 asserts 360 360 360 0 n/a 00:13:45.080 00:13:45.080 Elapsed time = 1.601 seconds 00:13:45.080 17:17:14 -- compliance/compliance.sh@42 -- # killprocess 75611 00:13:45.080 17:17:14 -- common/autotest_common.sh@936 -- # '[' -z 75611 ']' 00:13:45.080 17:17:14 -- common/autotest_common.sh@940 -- # kill -0 75611 00:13:45.080 17:17:14 -- common/autotest_common.sh@941 -- # uname 00:13:45.080 17:17:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.080 17:17:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75611 00:13:45.080 17:17:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:45.080 17:17:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:45.080 17:17:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75611' 00:13:45.080 killing process with pid 75611 00:13:45.080 17:17:14 -- common/autotest_common.sh@955 -- # kill 75611 00:13:45.080 17:17:14 -- common/autotest_common.sh@960 -- # wait 75611 00:13:45.339 17:17:15 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:45.339 17:17:15 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:45.339 00:13:45.339 real 0m5.845s 00:13:45.339 user 0m16.350s 00:13:45.339 sys 0m0.467s 00:13:45.339 17:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.339 17:17:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.339 ************************************ 00:13:45.339 END TEST nvmf_vfio_user_nvme_compliance 00:13:45.339 ************************************ 00:13:45.339 17:17:15 -- nvmf/nvmf.sh@43 -- # run_test 
nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:45.339 17:17:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:45.339 17:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.339 17:17:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.339 ************************************ 00:13:45.339 START TEST nvmf_vfio_user_fuzz 00:13:45.339 ************************************ 00:13:45.339 17:17:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:45.598 * Looking for test storage... 00:13:45.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.598 17:17:15 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.598 17:17:15 -- nvmf/common.sh@7 -- # uname -s 00:13:45.598 17:17:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.598 17:17:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.598 17:17:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.598 17:17:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.598 17:17:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.599 17:17:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.599 17:17:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.599 17:17:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.599 17:17:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.599 17:17:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.599 17:17:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:45.599 17:17:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:45.599 17:17:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.599 17:17:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.599 17:17:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.599 17:17:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.599 17:17:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.599 17:17:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.599 17:17:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.599 17:17:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.599 17:17:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.599 17:17:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.599 17:17:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.599 17:17:15 -- paths/export.sh@5 -- # export PATH 00:13:45.599 17:17:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.599 17:17:15 -- nvmf/common.sh@47 -- # : 0 00:13:45.599 17:17:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.599 17:17:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.599 17:17:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.599 17:17:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.599 17:17:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.599 17:17:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.599 17:17:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.599 17:17:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:45.599 Process pid: 75744 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=75744 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 75744' 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:45.599 17:17:15 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 75744 00:13:45.599 17:17:15 -- common/autotest_common.sh@817 -- # '[' -z 75744 ']' 
00:13:45.599 17:17:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.599 17:17:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:45.599 17:17:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.599 17:17:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:45.599 17:17:15 -- common/autotest_common.sh@10 -- # set +x 00:13:46.537 17:17:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:46.537 17:17:16 -- common/autotest_common.sh@850 -- # return 0 00:13:46.537 17:17:16 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:47.474 17:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.474 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 17:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:47.474 17:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.474 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 malloc0 00:13:47.474 17:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:47.474 17:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.474 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 17:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:47.474 17:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.474 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 17:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:47.474 17:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:47.474 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:13:47.474 17:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:47.474 17:17:17 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:48.042 Shutting down the fuzz application 00:13:48.042 17:17:17 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:48.042 17:17:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:48.042 17:17:17 -- common/autotest_common.sh@10 -- # set +x 00:13:48.042 17:17:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:48.042 17:17:17 -- target/vfio_user_fuzz.sh@46 -- # killprocess 75744 00:13:48.042 17:17:17 -- common/autotest_common.sh@936 -- # '[' -z 75744 ']' 00:13:48.042 17:17:17 -- common/autotest_common.sh@940 -- # kill -0 75744 
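For readers following the trace: the vfio_user_fuzz setup above reduces to a handful of RPCs against the freshly started nvmf_tgt, followed by a single nvme_fuzz run. A minimal standalone sketch of the same sequence, assuming scripts/rpc.py is called directly (the rpc_cmd helper in the trace is a wrapper around it); paths and NQNs are copied from the log, not invented:

# Sketch only -- replays the setup traced above against a running nvmf_tgt.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER                         # enable the vfio-user transport
mkdir -p /var/run/vfio-user                                    # socket dir used as traddr
$rpc bdev_malloc_create 64 512 -b malloc0                      # 64 MiB ramdisk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# Fuzz that controller for 30 s with core mask 0x2 and a fixed seed, as in the trace:
/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a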
00:13:48.042 17:17:17 -- common/autotest_common.sh@941 -- # uname 00:13:48.042 17:17:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:48.042 17:17:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75744 00:13:48.042 killing process with pid 75744 00:13:48.042 17:17:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:48.042 17:17:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:48.042 17:17:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75744' 00:13:48.042 17:17:17 -- common/autotest_common.sh@955 -- # kill 75744 00:13:48.042 17:17:17 -- common/autotest_common.sh@960 -- # wait 75744 00:13:48.302 17:17:18 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:48.302 17:17:18 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:48.302 00:13:48.302 real 0m2.758s 00:13:48.302 user 0m3.052s 00:13:48.302 sys 0m0.323s 00:13:48.302 ************************************ 00:13:48.302 END TEST nvmf_vfio_user_fuzz 00:13:48.302 ************************************ 00:13:48.302 17:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:48.302 17:17:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.302 17:17:18 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.302 17:17:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.302 17:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.302 17:17:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.302 ************************************ 00:13:48.302 START TEST nvmf_host_management 00:13:48.302 ************************************ 00:13:48.302 17:17:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.302 * Looking for test storage... 
00:13:48.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.302 17:17:18 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.302 17:17:18 -- nvmf/common.sh@7 -- # uname -s 00:13:48.302 17:17:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.302 17:17:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.302 17:17:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.302 17:17:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.302 17:17:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.302 17:17:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.302 17:17:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.302 17:17:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.302 17:17:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.302 17:17:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.302 17:17:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:48.302 17:17:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:48.302 17:17:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.302 17:17:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.302 17:17:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.302 17:17:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.302 17:17:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.302 17:17:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.302 17:17:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.302 17:17:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.302 17:17:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.303 17:17:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.303 17:17:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.303 17:17:18 -- paths/export.sh@5 -- # export PATH 00:13:48.303 17:17:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.303 17:17:18 -- nvmf/common.sh@47 -- # : 0 00:13:48.303 17:17:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.303 17:17:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.303 17:17:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.303 17:17:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.303 17:17:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.303 17:17:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.303 17:17:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.303 17:17:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.303 17:17:18 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.303 17:17:18 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.303 17:17:18 -- target/host_management.sh@105 -- # nvmftestinit 00:13:48.303 17:17:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:48.303 17:17:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.303 17:17:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:48.303 17:17:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:48.303 17:17:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:48.303 17:17:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.303 17:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.303 17:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.303 17:17:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:48.303 17:17:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:48.303 17:17:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:48.303 17:17:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:48.303 17:17:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:48.303 17:17:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:48.303 17:17:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.303 17:17:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.303 17:17:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.303 17:17:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.303 17:17:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.303 17:17:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.303 17:17:18 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.303 17:17:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.303 17:17:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.303 17:17:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.303 17:17:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.303 17:17:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.303 17:17:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.562 17:17:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.562 Cannot find device "nvmf_tgt_br" 00:13:48.562 17:17:18 -- nvmf/common.sh@155 -- # true 00:13:48.562 17:17:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.562 Cannot find device "nvmf_tgt_br2" 00:13:48.562 17:17:18 -- nvmf/common.sh@156 -- # true 00:13:48.562 17:17:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.562 17:17:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.562 Cannot find device "nvmf_tgt_br" 00:13:48.562 17:17:18 -- nvmf/common.sh@158 -- # true 00:13:48.562 17:17:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.562 Cannot find device "nvmf_tgt_br2" 00:13:48.562 17:17:18 -- nvmf/common.sh@159 -- # true 00:13:48.562 17:17:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:48.562 17:17:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:48.562 17:17:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.562 17:17:18 -- nvmf/common.sh@162 -- # true 00:13:48.562 17:17:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.562 17:17:18 -- nvmf/common.sh@163 -- # true 00:13:48.562 17:17:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.562 17:17:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.562 17:17:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.562 17:17:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.562 17:17:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.562 17:17:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.562 17:17:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:48.562 17:17:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:48.562 17:17:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:48.821 17:17:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:48.821 17:17:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:48.821 17:17:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:48.821 17:17:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:48.821 17:17:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:48.821 17:17:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:48.821 17:17:18 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:13:48.821 17:17:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:48.821 17:17:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:48.821 17:17:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:48.821 17:17:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:48.821 17:17:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:48.821 17:17:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:48.821 17:17:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:48.821 17:17:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:48.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:48.821 00:13:48.821 --- 10.0.0.2 ping statistics --- 00:13:48.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.821 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:48.821 17:17:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:48.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:48.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:13:48.821 00:13:48.821 --- 10.0.0.3 ping statistics --- 00:13:48.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.821 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:48.821 17:17:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:48.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:48.821 00:13:48.821 --- 10.0.0.1 ping statistics --- 00:13:48.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.821 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:48.821 17:17:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.821 17:17:18 -- nvmf/common.sh@422 -- # return 0 00:13:48.821 17:17:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:48.821 17:17:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.821 17:17:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:48.821 17:17:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:48.821 17:17:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.821 17:17:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:48.821 17:17:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:48.821 17:17:18 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:48.821 17:17:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:48.821 17:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.821 17:17:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.821 ************************************ 00:13:48.821 START TEST nvmf_host_management 00:13:48.821 ************************************ 00:13:48.821 17:17:18 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:48.821 17:17:18 -- target/host_management.sh@69 -- # starttarget 00:13:48.821 17:17:18 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:48.821 17:17:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:48.821 17:17:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:48.821 17:17:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.822 17:17:18 -- nvmf/common.sh@470 -- # nvmfpid=75987 00:13:48.822 
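The nvmf_veth_init trace above builds a small topology: the target's interfaces live in the nvmf_tgt_ns_spdk namespace and reach the host-side initiator interface through veth pairs slaved to the nvmf_br bridge. A condensed sketch of the same commands, taken from the trace rather than a tested script, with the second target interface (nvmf_tgt_if2 / 10.0.0.3) omitted:

# Condensed from the trace above; cleanup and the second target veth omitted.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                      # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                               # tie both sides together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target sanity check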
17:17:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:48.822 17:17:18 -- nvmf/common.sh@471 -- # waitforlisten 75987 00:13:48.822 17:17:18 -- common/autotest_common.sh@817 -- # '[' -z 75987 ']' 00:13:48.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.822 17:17:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.822 17:17:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:48.822 17:17:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.822 17:17:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:48.822 17:17:18 -- common/autotest_common.sh@10 -- # set +x 00:13:49.081 [2024-04-25 17:17:18.803355] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:49.081 [2024-04-25 17:17:18.803458] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.081 [2024-04-25 17:17:18.943923] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.081 [2024-04-25 17:17:19.017331] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.081 [2024-04-25 17:17:19.017626] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.081 [2024-04-25 17:17:19.017939] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.081 [2024-04-25 17:17:19.018114] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.081 [2024-04-25 17:17:19.018298] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:49.081 [2024-04-25 17:17:19.018525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.081 [2024-04-25 17:17:19.018747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.081 [2024-04-25 17:17:19.018851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.081 [2024-04-25 17:17:19.018850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:50.020 17:17:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:50.020 17:17:19 -- common/autotest_common.sh@850 -- # return 0 00:13:50.020 17:17:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:50.020 17:17:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:50.020 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.020 17:17:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.020 17:17:19 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.020 17:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.020 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.020 [2024-04-25 17:17:19.883211] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.020 17:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.020 17:17:19 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:50.020 17:17:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:50.020 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.020 17:17:19 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:50.020 17:17:19 -- target/host_management.sh@23 -- # cat 00:13:50.020 17:17:19 -- target/host_management.sh@30 -- # rpc_cmd 00:13:50.020 17:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:50.020 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.020 Malloc0 00:13:50.020 [2024-04-25 17:17:19.957899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.020 17:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:50.020 17:17:19 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:50.020 17:17:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:50.020 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:13:50.280 17:17:20 -- target/host_management.sh@73 -- # perfpid=76059 00:13:50.280 17:17:20 -- target/host_management.sh@74 -- # waitforlisten 76059 /var/tmp/bdevperf.sock 00:13:50.280 17:17:20 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:50.280 17:17:20 -- common/autotest_common.sh@817 -- # '[' -z 76059 ']' 00:13:50.280 17:17:20 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:50.280 17:17:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.280 17:17:20 -- nvmf/common.sh@521 -- # config=() 00:13:50.280 17:17:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:50.280 17:17:20 -- nvmf/common.sh@521 -- # local subsystem config 00:13:50.280 17:17:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:50.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:50.280 17:17:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.280 17:17:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:50.280 17:17:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:50.280 { 00:13:50.280 "params": { 00:13:50.280 "name": "Nvme$subsystem", 00:13:50.280 "trtype": "$TEST_TRANSPORT", 00:13:50.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:50.280 "adrfam": "ipv4", 00:13:50.280 "trsvcid": "$NVMF_PORT", 00:13:50.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:50.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:50.280 "hdgst": ${hdgst:-false}, 00:13:50.280 "ddgst": ${ddgst:-false} 00:13:50.280 }, 00:13:50.280 "method": "bdev_nvme_attach_controller" 00:13:50.280 } 00:13:50.280 EOF 00:13:50.280 )") 00:13:50.280 17:17:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.280 17:17:20 -- nvmf/common.sh@543 -- # cat 00:13:50.280 17:17:20 -- nvmf/common.sh@545 -- # jq . 00:13:50.280 17:17:20 -- nvmf/common.sh@546 -- # IFS=, 00:13:50.280 17:17:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:50.280 "params": { 00:13:50.280 "name": "Nvme0", 00:13:50.280 "trtype": "tcp", 00:13:50.280 "traddr": "10.0.0.2", 00:13:50.280 "adrfam": "ipv4", 00:13:50.280 "trsvcid": "4420", 00:13:50.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:50.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:50.280 "hdgst": false, 00:13:50.280 "ddgst": false 00:13:50.280 }, 00:13:50.280 "method": "bdev_nvme_attach_controller" 00:13:50.280 }' 00:13:50.280 [2024-04-25 17:17:20.057666] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:50.280 [2024-04-25 17:17:20.057947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76059 ] 00:13:50.280 [2024-04-25 17:17:20.192085] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.539 [2024-04-25 17:17:20.261578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.539 Running I/O for 10 seconds... 
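The bdevperf invocation traced above attaches to the subsystem created a few lines earlier over TCP; gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed in the trace and hands it to bdevperf on fd 63. A rough equivalent using a plain file instead of the process substitution; the outer subsystems/bdev wrapper is an assumption, since only the inner fragment appears in the trace:

# Sketch: config fragment copied from the trace, wrapped in the usual SPDK JSON
# config shape (assumed), then fed to bdevperf with the same flags as the traced run.
cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep queue, 64 KiB I/Os, verify workload, 10 seconds -- same flags as the trace:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 10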
00:13:51.107 17:17:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:51.107 17:17:21 -- common/autotest_common.sh@850 -- # return 0 00:13:51.107 17:17:21 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:51.107 17:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.107 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:13:51.107 17:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.107 17:17:21 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.107 17:17:21 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:51.107 17:17:21 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:51.107 17:17:21 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:51.107 17:17:21 -- target/host_management.sh@52 -- # local ret=1 00:13:51.107 17:17:21 -- target/host_management.sh@53 -- # local i 00:13:51.107 17:17:21 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:51.107 17:17:21 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:51.107 17:17:21 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:51.107 17:17:21 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:51.107 17:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.107 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:13:51.368 17:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.368 17:17:21 -- target/host_management.sh@55 -- # read_io_count=963 00:13:51.368 17:17:21 -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:13:51.368 17:17:21 -- target/host_management.sh@59 -- # ret=0 00:13:51.368 17:17:21 -- target/host_management.sh@60 -- # break 00:13:51.368 17:17:21 -- target/host_management.sh@64 -- # return 0 00:13:51.368 17:17:21 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:51.368 17:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.368 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:13:51.368 [2024-04-25 17:17:21.135463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.135674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.135914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to 
be set 00:13:51.368 [2024-04-25 17:17:21.136258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136386] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136522] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136572] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136586] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136653] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136714] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136728] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.368 [2024-04-25 17:17:21.136796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136853] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.136921] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83e7f0 is same with the state(5) to be set 00:13:51.369 [2024-04-25 17:17:21.137058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.369 [2024-04-25 17:17:21.137842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.369 [2024-04-25 17:17:21.137854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.137863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.137875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.137884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.137896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.137906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.137918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.137928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.137939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.137949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.137961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.137970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.137983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.137993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 
[2024-04-25 17:17:21.138099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138316] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.370 [2024-04-25 17:17:21.138490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.370 [2024-04-25 17:17:21.138501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cb8d0 is same with the state(5) to be set 00:13:51.370 [2024-04-25 17:17:21.138551] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5cb8d0 was disconnected and freed. reset controller. 
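Note: the long run of *NOTICE* lines above is nvme_qpair.c reporting each in-flight READ on sqid 1 as ABORTED - SQ DELETION (00/08) while the qpair is torn down ahead of the controller reset logged just below; it is one abort per outstanding command, not a series of distinct failures. If a quick digest of such a flood is wanted from a saved copy of this log, something along these lines works (the file name build.log is only a placeholder, not a file this job writes):

# Count aborted commands per status/qid pair in a saved copy of the log (illustrative only).
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c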
00:13:51.370 [2024-04-25 17:17:21.139776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:51.370 17:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.370 17:17:21 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:51.370 17:17:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.370 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:13:51.370 task offset: 0 on job bdev=Nvme0n1 fails 00:13:51.370 00:13:51.370 Latency(us) 00:13:51.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.370 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:51.370 Job: Nvme0n1 ended in about 0.73 seconds with error 00:13:51.370 Verification LBA range: start 0x0 length 0x400 00:13:51.370 Nvme0n1 : 0.73 1407.37 87.96 87.96 0.00 41777.30 6911.07 39321.60 00:13:51.370 =================================================================================================================== 00:13:51.370 Total : 1407.37 87.96 87.96 0.00 41777.30 6911.07 39321.60 00:13:51.370 [2024-04-25 17:17:21.141855] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:51.370 [2024-04-25 17:17:21.141885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a61b0 (9): Bad file descriptor 00:13:51.370 17:17:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.370 17:17:21 -- target/host_management.sh@87 -- # sleep 1 00:13:51.370 [2024-04-25 17:17:21.150505] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:52.308 17:17:22 -- target/host_management.sh@91 -- # kill -9 76059 00:13:52.308 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (76059) - No such process 00:13:52.308 17:17:22 -- target/host_management.sh@91 -- # true 00:13:52.308 17:17:22 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:52.308 17:17:22 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:52.308 17:17:22 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:52.308 17:17:22 -- nvmf/common.sh@521 -- # config=() 00:13:52.308 17:17:22 -- nvmf/common.sh@521 -- # local subsystem config 00:13:52.308 17:17:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:52.308 17:17:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:52.308 { 00:13:52.308 "params": { 00:13:52.308 "name": "Nvme$subsystem", 00:13:52.308 "trtype": "$TEST_TRANSPORT", 00:13:52.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.308 "adrfam": "ipv4", 00:13:52.308 "trsvcid": "$NVMF_PORT", 00:13:52.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.308 "hdgst": ${hdgst:-false}, 00:13:52.308 "ddgst": ${ddgst:-false} 00:13:52.308 }, 00:13:52.308 "method": "bdev_nvme_attach_controller" 00:13:52.308 } 00:13:52.308 EOF 00:13:52.308 )") 00:13:52.308 17:17:22 -- nvmf/common.sh@543 -- # cat 00:13:52.308 17:17:22 -- nvmf/common.sh@545 -- # jq . 
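Note: the config+= heredoc above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem and piping the result through jq. A standalone sketch of that step, filled in with the values this run resolves to (they appear verbatim in the printf just below), would be:

# Build and pretty-print one attach-controller entry, as the helper does; the
# test then hands the resulting JSON to bdevperf via --json /dev/fd/62.
cat <<'EOF' | jq .
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF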
00:13:52.308 17:17:22 -- nvmf/common.sh@546 -- # IFS=, 00:13:52.308 17:17:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:52.308 "params": { 00:13:52.308 "name": "Nvme0", 00:13:52.308 "trtype": "tcp", 00:13:52.308 "traddr": "10.0.0.2", 00:13:52.308 "adrfam": "ipv4", 00:13:52.308 "trsvcid": "4420", 00:13:52.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.308 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:52.308 "hdgst": false, 00:13:52.308 "ddgst": false 00:13:52.308 }, 00:13:52.308 "method": "bdev_nvme_attach_controller" 00:13:52.308 }' 00:13:52.308 [2024-04-25 17:17:22.207979] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:52.308 [2024-04-25 17:17:22.208081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76109 ] 00:13:52.567 [2024-04-25 17:17:22.342659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.567 [2024-04-25 17:17:22.400824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.567 Running I/O for 1 seconds... 00:13:53.945 00:13:53.945 Latency(us) 00:13:53.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.945 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:53.945 Verification LBA range: start 0x0 length 0x400 00:13:53.945 Nvme0n1 : 1.01 1524.91 95.31 0.00 0.00 41113.63 5093.93 37891.72 00:13:53.945 =================================================================================================================== 00:13:53.945 Total : 1524.91 95.31 0.00 0.00 41113.63 5093.93 37891.72 00:13:53.945 17:17:23 -- target/host_management.sh@102 -- # stoptarget 00:13:53.945 17:17:23 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:53.945 17:17:23 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:53.945 17:17:23 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:53.945 17:17:23 -- target/host_management.sh@40 -- # nvmftestfini 00:13:53.945 17:17:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:53.945 17:17:23 -- nvmf/common.sh@117 -- # sync 00:13:53.945 17:17:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.945 17:17:23 -- nvmf/common.sh@120 -- # set +e 00:13:53.945 17:17:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.945 17:17:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.945 rmmod nvme_tcp 00:13:53.945 rmmod nvme_fabrics 00:13:53.945 rmmod nvme_keyring 00:13:53.945 17:17:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.945 17:17:23 -- nvmf/common.sh@124 -- # set -e 00:13:53.945 17:17:23 -- nvmf/common.sh@125 -- # return 0 00:13:53.945 17:17:23 -- nvmf/common.sh@478 -- # '[' -n 75987 ']' 00:13:53.945 17:17:23 -- nvmf/common.sh@479 -- # killprocess 75987 00:13:53.945 17:17:23 -- common/autotest_common.sh@936 -- # '[' -z 75987 ']' 00:13:53.945 17:17:23 -- common/autotest_common.sh@940 -- # kill -0 75987 00:13:53.945 17:17:23 -- common/autotest_common.sh@941 -- # uname 00:13:53.945 17:17:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:53.945 17:17:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75987 00:13:53.945 17:17:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:53.945 17:17:23 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:53.945 17:17:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75987' 00:13:53.945 killing process with pid 75987 00:13:53.945 17:17:23 -- common/autotest_common.sh@955 -- # kill 75987 00:13:53.945 17:17:23 -- common/autotest_common.sh@960 -- # wait 75987 00:13:54.204 [2024-04-25 17:17:24.045452] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:54.204 17:17:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:54.204 17:17:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:54.204 17:17:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:54.204 17:17:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.204 17:17:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.204 17:17:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.204 17:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.204 17:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.204 17:17:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:54.204 00:13:54.204 real 0m5.373s 00:13:54.204 user 0m22.757s 00:13:54.204 sys 0m1.048s 00:13:54.204 17:17:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:54.204 17:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.204 ************************************ 00:13:54.204 END TEST nvmf_host_management 00:13:54.204 ************************************ 00:13:54.204 17:17:24 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:54.204 00:13:54.204 real 0m5.996s 00:13:54.204 user 0m22.898s 00:13:54.204 sys 0m1.330s 00:13:54.204 17:17:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:54.204 17:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.204 ************************************ 00:13:54.204 END TEST nvmf_host_management 00:13:54.204 ************************************ 00:13:54.463 17:17:24 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:54.463 17:17:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:54.463 17:17:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.463 17:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.463 ************************************ 00:13:54.463 START TEST nvmf_lvol 00:13:54.463 ************************************ 00:13:54.463 17:17:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:54.463 * Looking for test storage... 
00:13:54.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:54.463 17:17:24 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.463 17:17:24 -- nvmf/common.sh@7 -- # uname -s 00:13:54.463 17:17:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.463 17:17:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.463 17:17:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.463 17:17:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.463 17:17:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.463 17:17:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.463 17:17:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.463 17:17:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.463 17:17:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.463 17:17:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.463 17:17:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:54.463 17:17:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:13:54.463 17:17:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.463 17:17:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.463 17:17:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.463 17:17:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.463 17:17:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.463 17:17:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.463 17:17:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.463 17:17:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.463 17:17:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.463 17:17:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.463 17:17:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.463 17:17:24 -- paths/export.sh@5 -- # export PATH 00:13:54.463 17:17:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.463 17:17:24 -- nvmf/common.sh@47 -- # : 0 00:13:54.464 17:17:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.464 17:17:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.464 17:17:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.464 17:17:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.464 17:17:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.464 17:17:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.464 17:17:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.464 17:17:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.464 17:17:24 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.464 17:17:24 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.464 17:17:24 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:54.464 17:17:24 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:54.464 17:17:24 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:54.464 17:17:24 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:54.464 17:17:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:54.464 17:17:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.464 17:17:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:54.464 17:17:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:54.464 17:17:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:54.464 17:17:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.464 17:17:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.464 17:17:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.464 17:17:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:54.464 17:17:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:54.464 17:17:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:54.464 17:17:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:54.464 17:17:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:54.464 17:17:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:54.464 17:17:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.464 17:17:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.464 17:17:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:54.464 17:17:24 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:54.464 17:17:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.464 17:17:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.464 17:17:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.464 17:17:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.464 17:17:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.464 17:17:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.464 17:17:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.464 17:17:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.464 17:17:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:54.464 17:17:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:54.464 Cannot find device "nvmf_tgt_br" 00:13:54.464 17:17:24 -- nvmf/common.sh@155 -- # true 00:13:54.464 17:17:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:54.464 Cannot find device "nvmf_tgt_br2" 00:13:54.464 17:17:24 -- nvmf/common.sh@156 -- # true 00:13:54.464 17:17:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:54.464 17:17:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:54.464 Cannot find device "nvmf_tgt_br" 00:13:54.464 17:17:24 -- nvmf/common.sh@158 -- # true 00:13:54.464 17:17:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:54.722 Cannot find device "nvmf_tgt_br2" 00:13:54.722 17:17:24 -- nvmf/common.sh@159 -- # true 00:13:54.722 17:17:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:54.722 17:17:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:54.722 17:17:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.722 17:17:24 -- nvmf/common.sh@162 -- # true 00:13:54.722 17:17:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.722 17:17:24 -- nvmf/common.sh@163 -- # true 00:13:54.722 17:17:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.722 17:17:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.722 17:17:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.722 17:17:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.722 17:17:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:54.722 17:17:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:54.722 17:17:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:54.723 17:17:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:54.723 17:17:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:54.723 17:17:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:54.723 17:17:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:54.723 17:17:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:54.723 17:17:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:54.723 17:17:24 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:54.723 17:17:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:54.723 17:17:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:54.723 17:17:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:54.723 17:17:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:54.723 17:17:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:54.723 17:17:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:54.723 17:17:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:54.723 17:17:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:54.723 17:17:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:54.723 17:17:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:54.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:54.723 00:13:54.723 --- 10.0.0.2 ping statistics --- 00:13:54.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.723 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:54.723 17:17:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:54.723 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:54.723 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:54.723 00:13:54.723 --- 10.0.0.3 ping statistics --- 00:13:54.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.723 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:54.723 17:17:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:54.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:54.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:54.981 00:13:54.981 --- 10.0.0.1 ping statistics --- 00:13:54.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.981 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:54.981 17:17:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.981 17:17:24 -- nvmf/common.sh@422 -- # return 0 00:13:54.981 17:17:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:54.981 17:17:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.981 17:17:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:54.981 17:17:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:54.981 17:17:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.981 17:17:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:54.981 17:17:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:54.981 17:17:24 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:54.981 17:17:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:54.981 17:17:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:54.981 17:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.981 17:17:24 -- nvmf/common.sh@470 -- # nvmfpid=76340 00:13:54.981 17:17:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:54.981 17:17:24 -- nvmf/common.sh@471 -- # waitforlisten 76340 00:13:54.981 17:17:24 -- common/autotest_common.sh@817 -- # '[' -z 76340 ']' 00:13:54.982 17:17:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.982 17:17:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.982 17:17:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.982 17:17:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.982 17:17:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.982 [2024-04-25 17:17:24.790793] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:13:54.982 [2024-04-25 17:17:24.790918] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.982 [2024-04-25 17:17:24.929885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.240 [2024-04-25 17:17:24.990224] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.240 [2024-04-25 17:17:24.990267] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.240 [2024-04-25 17:17:24.990279] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.240 [2024-04-25 17:17:24.990287] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.240 [2024-04-25 17:17:24.990295] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
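Note: the app_setup_trace notices above name the runtime tracing hooks for this target instance (shm id 0). Done by hand, the two options the notice mentions look like this (the /tmp destination for the offline copy is just an example path, not one used by the test):

# Snapshot nvmf tracepoints from the running target, as the notice suggests,
# or copy the raw shared-memory trace file for offline analysis.
spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0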
00:13:55.240 [2024-04-25 17:17:24.990488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.240 [2024-04-25 17:17:24.990594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.240 [2024-04-25 17:17:24.990600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.807 17:17:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.807 17:17:25 -- common/autotest_common.sh@850 -- # return 0 00:13:55.807 17:17:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:55.807 17:17:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:55.807 17:17:25 -- common/autotest_common.sh@10 -- # set +x 00:13:56.065 17:17:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.065 17:17:25 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.065 [2024-04-25 17:17:26.038898] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.323 17:17:26 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.582 17:17:26 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:56.582 17:17:26 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.841 17:17:26 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:56.841 17:17:26 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:57.100 17:17:26 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:57.360 17:17:27 -- target/nvmf_lvol.sh@29 -- # lvs=76363e60-ab70-4832-8d5d-077851e46976 00:13:57.360 17:17:27 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 76363e60-ab70-4832-8d5d-077851e46976 lvol 20 00:13:57.620 17:17:27 -- target/nvmf_lvol.sh@32 -- # lvol=ff94be87-4c95-4bab-a37b-fd50965fefed 00:13:57.620 17:17:27 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:57.879 17:17:27 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ff94be87-4c95-4bab-a37b-fd50965fefed 00:13:58.138 17:17:27 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:58.398 [2024-04-25 17:17:28.119597] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.398 17:17:28 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.656 17:17:28 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:58.656 17:17:28 -- target/nvmf_lvol.sh@42 -- # perf_pid=76482 00:13:58.656 17:17:28 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:59.649 17:17:29 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ff94be87-4c95-4bab-a37b-fd50965fefed MY_SNAPSHOT 00:13:59.909 17:17:29 -- target/nvmf_lvol.sh@47 -- # snapshot=e3854af3-553c-4843-b805-fb9c3e483836 00:13:59.909 17:17:29 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ff94be87-4c95-4bab-a37b-fd50965fefed 30 00:14:00.168 17:17:29 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e3854af3-553c-4843-b805-fb9c3e483836 MY_CLONE 00:14:00.427 17:17:30 -- target/nvmf_lvol.sh@49 -- # clone=74da69e7-7a24-4f83-bc0c-ecaa236af9b5 00:14:00.427 17:17:30 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 74da69e7-7a24-4f83-bc0c-ecaa236af9b5 00:14:00.996 17:17:30 -- target/nvmf_lvol.sh@53 -- # wait 76482 00:14:09.114 Initializing NVMe Controllers 00:14:09.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:09.114 Controller IO queue size 128, less than required. 00:14:09.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:09.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:09.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:09.114 Initialization complete. Launching workers. 00:14:09.114 ======================================================== 00:14:09.114 Latency(us) 00:14:09.114 Device Information : IOPS MiB/s Average min max 00:14:09.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10034.50 39.20 12758.74 2053.73 55451.03 00:14:09.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10098.40 39.45 12680.09 1113.01 77372.65 00:14:09.114 ======================================================== 00:14:09.114 Total : 20132.90 78.64 12719.29 1113.01 77372.65 00:14:09.114 00:14:09.114 17:17:38 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:09.114 17:17:38 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ff94be87-4c95-4bab-a37b-fd50965fefed 00:14:09.374 17:17:39 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76363e60-ab70-4832-8d5d-077851e46976 00:14:09.374 17:17:39 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:09.374 17:17:39 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:09.374 17:17:39 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:09.374 17:17:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:09.374 17:17:39 -- nvmf/common.sh@117 -- # sync 00:14:09.634 17:17:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.634 17:17:39 -- nvmf/common.sh@120 -- # set +e 00:14:09.634 17:17:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.634 17:17:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.634 rmmod nvme_tcp 00:14:09.634 rmmod nvme_fabrics 00:14:09.634 rmmod nvme_keyring 00:14:09.634 17:17:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.634 17:17:39 -- nvmf/common.sh@124 -- # set -e 00:14:09.634 17:17:39 -- nvmf/common.sh@125 -- # return 0 00:14:09.634 17:17:39 -- nvmf/common.sh@478 -- # '[' -n 76340 ']' 00:14:09.634 17:17:39 -- nvmf/common.sh@479 -- # killprocess 76340 00:14:09.634 17:17:39 -- common/autotest_common.sh@936 -- # '[' -z 76340 ']' 00:14:09.634 17:17:39 -- common/autotest_common.sh@940 -- # kill -0 76340 00:14:09.634 17:17:39 -- common/autotest_common.sh@941 -- # uname 00:14:09.634 17:17:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:09.634 17:17:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 76340 00:14:09.634 17:17:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:09.634 killing process with pid 76340 00:14:09.634 17:17:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:09.634 17:17:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76340' 00:14:09.634 17:17:39 -- common/autotest_common.sh@955 -- # kill 76340 00:14:09.634 17:17:39 -- common/autotest_common.sh@960 -- # wait 76340 00:14:09.893 17:17:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:09.893 17:17:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:09.893 17:17:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:09.893 17:17:39 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.893 17:17:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.893 17:17:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.893 17:17:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.893 17:17:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.893 17:17:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:09.893 00:14:09.893 real 0m15.443s 00:14:09.893 user 1m5.019s 00:14:09.893 sys 0m3.651s 00:14:09.893 17:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:09.893 ************************************ 00:14:09.893 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:09.893 END TEST nvmf_lvol 00:14:09.893 ************************************ 00:14:09.893 17:17:39 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:09.893 17:17:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:09.893 17:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.893 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:09.893 ************************************ 00:14:09.893 START TEST nvmf_lvs_grow 00:14:09.893 ************************************ 00:14:09.893 17:17:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:10.152 * Looking for test storage... 
00:14:10.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.152 17:17:39 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.152 17:17:39 -- nvmf/common.sh@7 -- # uname -s 00:14:10.152 17:17:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.152 17:17:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.152 17:17:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.152 17:17:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.152 17:17:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.152 17:17:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.152 17:17:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.152 17:17:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.152 17:17:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.152 17:17:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.152 17:17:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:14:10.152 17:17:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:14:10.152 17:17:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.152 17:17:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.152 17:17:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.152 17:17:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.152 17:17:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.152 17:17:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.152 17:17:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.152 17:17:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.152 17:17:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.152 17:17:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.152 17:17:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.152 17:17:39 -- paths/export.sh@5 -- # export PATH 00:14:10.152 17:17:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.152 17:17:39 -- nvmf/common.sh@47 -- # : 0 00:14:10.152 17:17:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.152 17:17:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.152 17:17:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.152 17:17:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.152 17:17:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.152 17:17:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.152 17:17:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.152 17:17:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.152 17:17:39 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:10.152 17:17:39 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.152 17:17:39 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:10.152 17:17:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:10.152 17:17:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.152 17:17:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:10.152 17:17:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:10.152 17:17:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:10.152 17:17:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.152 17:17:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.152 17:17:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.152 17:17:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:10.152 17:17:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:10.152 17:17:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:10.152 17:17:39 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:10.152 17:17:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:10.152 17:17:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:10.152 17:17:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.152 17:17:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.152 17:17:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:10.152 17:17:39 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:10.152 17:17:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.152 17:17:39 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.152 17:17:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.152 17:17:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.152 17:17:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.152 17:17:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.152 17:17:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.152 17:17:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.152 17:17:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:10.152 17:17:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:10.152 Cannot find device "nvmf_tgt_br" 00:14:10.152 17:17:39 -- nvmf/common.sh@155 -- # true 00:14:10.152 17:17:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.152 Cannot find device "nvmf_tgt_br2" 00:14:10.152 17:17:39 -- nvmf/common.sh@156 -- # true 00:14:10.152 17:17:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:10.152 17:17:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:10.152 Cannot find device "nvmf_tgt_br" 00:14:10.152 17:17:39 -- nvmf/common.sh@158 -- # true 00:14:10.152 17:17:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:10.152 Cannot find device "nvmf_tgt_br2" 00:14:10.152 17:17:39 -- nvmf/common.sh@159 -- # true 00:14:10.152 17:17:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:10.152 17:17:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:10.152 17:17:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.152 17:17:40 -- nvmf/common.sh@162 -- # true 00:14:10.152 17:17:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.152 17:17:40 -- nvmf/common.sh@163 -- # true 00:14:10.152 17:17:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.152 17:17:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.152 17:17:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.152 17:17:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.152 17:17:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.152 17:17:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.152 17:17:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.152 17:17:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:10.152 17:17:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:10.152 17:17:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:10.152 17:17:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:10.152 17:17:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:10.153 17:17:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:10.412 17:17:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.412 17:17:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
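Note: as in the lvol run earlier, nvmf_veth_init is rebuilding the per-test topology here: a veth pair per endpoint, the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace, and (in the lines that follow) the host-side peers enslaved to a bridge with an iptables accept rule for port 4420. Condensed into a hand-runnable sketch with the same names and addresses (root required; the second target interface is left out for brevity):

# Illustrative recap of the veth/namespace/bridge setup performed above and below.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT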
00:14:10.412 17:17:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.412 17:17:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:10.412 17:17:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:10.412 17:17:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.412 17:17:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.412 17:17:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.412 17:17:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.412 17:17:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.412 17:17:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:10.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:14:10.412 00:14:10.412 --- 10.0.0.2 ping statistics --- 00:14:10.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.412 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:10.412 17:17:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:10.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:10.412 00:14:10.412 --- 10.0.0.3 ping statistics --- 00:14:10.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.412 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:10.412 17:17:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:10.412 00:14:10.412 --- 10.0.0.1 ping statistics --- 00:14:10.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.412 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:10.412 17:17:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.412 17:17:40 -- nvmf/common.sh@422 -- # return 0 00:14:10.412 17:17:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:10.412 17:17:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.412 17:17:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:10.412 17:17:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:10.412 17:17:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.412 17:17:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:10.412 17:17:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:10.412 17:17:40 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:10.412 17:17:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:10.412 17:17:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:10.412 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:10.412 17:17:40 -- nvmf/common.sh@470 -- # nvmfpid=76856 00:14:10.412 17:17:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:10.412 17:17:40 -- nvmf/common.sh@471 -- # waitforlisten 76856 00:14:10.412 17:17:40 -- common/autotest_common.sh@817 -- # '[' -z 76856 ']' 00:14:10.412 17:17:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.412 17:17:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:10.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
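The target application is then started inside that namespace and driven over the default RPC socket. A condensed sketch of this bring-up, assuming an SPDK build tree as the working directory (the polling loop is a stand-in for the harness's waitforlisten helper; flag meanings follow standard nvmf_tgt usage):

  # -m 0x1: run on core 0 only, -e 0xFFFF: enable all tracepoint groups, -i 0: shared-memory instance id
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # wait until the app answers on its default RPC socket /var/tmp/spdk.sock
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # the test then creates the TCP transport with the same flags seen a few lines below
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192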
00:14:10.412 17:17:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.412 17:17:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:10.412 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:10.412 [2024-04-25 17:17:40.299650] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:10.412 [2024-04-25 17:17:40.299759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.672 [2024-04-25 17:17:40.430912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.672 [2024-04-25 17:17:40.485575] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.672 [2024-04-25 17:17:40.485633] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.672 [2024-04-25 17:17:40.485642] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.672 [2024-04-25 17:17:40.485648] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.672 [2024-04-25 17:17:40.485654] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.672 [2024-04-25 17:17:40.485682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.610 17:17:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:11.610 17:17:41 -- common/autotest_common.sh@850 -- # return 0 00:14:11.610 17:17:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:11.610 17:17:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:11.610 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:11.610 17:17:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.610 17:17:41 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.610 [2024-04-25 17:17:41.509187] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.610 17:17:41 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:11.610 17:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:11.610 17:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.610 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:11.869 ************************************ 00:14:11.869 START TEST lvs_grow_clean 00:14:11.869 ************************************ 00:14:11.869 17:17:41 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:11.869 17:17:41 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:12.128 17:17:41 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:12.128 17:17:41 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:12.388 17:17:42 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:12.388 17:17:42 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:12.388 17:17:42 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:12.647 17:17:42 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:12.647 17:17:42 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:12.647 17:17:42 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c87cd02b-7415-4d7a-aa60-eb554520fb31 lvol 150 00:14:12.907 17:17:42 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e 00:14:12.907 17:17:42 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:12.907 17:17:42 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:12.907 [2024-04-25 17:17:42.881620] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:12.907 [2024-04-25 17:17:42.881699] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:13.166 true 00:14:13.166 17:17:42 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:13.166 17:17:42 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:13.166 17:17:43 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:13.166 17:17:43 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:13.425 17:17:43 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e 00:14:13.683 17:17:43 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:13.942 [2024-04-25 17:17:43.786246] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.942 17:17:43 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.201 17:17:44 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77024 00:14:14.201 17:17:44 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:14.201 17:17:44 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.201 17:17:44 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77024 
/var/tmp/bdevperf.sock 00:14:14.201 17:17:44 -- common/autotest_common.sh@817 -- # '[' -z 77024 ']' 00:14:14.201 17:17:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.201 17:17:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:14.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.201 17:17:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.201 17:17:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:14.201 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:14:14.201 [2024-04-25 17:17:44.141783] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:14.201 [2024-04-25 17:17:44.141907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77024 ] 00:14:14.459 [2024-04-25 17:17:44.277009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.460 [2024-04-25 17:17:44.329734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.460 17:17:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:14.460 17:17:44 -- common/autotest_common.sh@850 -- # return 0 00:14:14.460 17:17:44 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:15.028 Nvme0n1 00:14:15.028 17:17:44 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:15.028 [ 00:14:15.028 { 00:14:15.028 "aliases": [ 00:14:15.028 "a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e" 00:14:15.028 ], 00:14:15.028 "assigned_rate_limits": { 00:14:15.028 "r_mbytes_per_sec": 0, 00:14:15.028 "rw_ios_per_sec": 0, 00:14:15.028 "rw_mbytes_per_sec": 0, 00:14:15.028 "w_mbytes_per_sec": 0 00:14:15.028 }, 00:14:15.028 "block_size": 4096, 00:14:15.028 "claimed": false, 00:14:15.028 "driver_specific": { 00:14:15.028 "mp_policy": "active_passive", 00:14:15.028 "nvme": [ 00:14:15.028 { 00:14:15.028 "ctrlr_data": { 00:14:15.028 "ana_reporting": false, 00:14:15.028 "cntlid": 1, 00:14:15.028 "firmware_revision": "24.05", 00:14:15.028 "model_number": "SPDK bdev Controller", 00:14:15.028 "multi_ctrlr": true, 00:14:15.028 "oacs": { 00:14:15.028 "firmware": 0, 00:14:15.028 "format": 0, 00:14:15.028 "ns_manage": 0, 00:14:15.028 "security": 0 00:14:15.028 }, 00:14:15.028 "serial_number": "SPDK0", 00:14:15.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:15.028 "vendor_id": "0x8086" 00:14:15.028 }, 00:14:15.028 "ns_data": { 00:14:15.028 "can_share": true, 00:14:15.028 "id": 1 00:14:15.028 }, 00:14:15.028 "trid": { 00:14:15.028 "adrfam": "IPv4", 00:14:15.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:15.028 "traddr": "10.0.0.2", 00:14:15.028 "trsvcid": "4420", 00:14:15.028 "trtype": "TCP" 00:14:15.028 }, 00:14:15.028 "vs": { 00:14:15.028 "nvme_version": "1.3" 00:14:15.028 } 00:14:15.028 } 00:14:15.028 ] 00:14:15.028 }, 00:14:15.028 "memory_domains": [ 00:14:15.028 { 00:14:15.028 "dma_device_id": "system", 00:14:15.028 "dma_device_type": 1 00:14:15.028 } 00:14:15.028 ], 00:14:15.028 "name": "Nvme0n1", 00:14:15.028 "num_blocks": 38912, 00:14:15.028 "product_name": "NVMe 
disk", 00:14:15.028 "supported_io_types": { 00:14:15.028 "abort": true, 00:14:15.028 "compare": true, 00:14:15.028 "compare_and_write": true, 00:14:15.028 "flush": true, 00:14:15.028 "nvme_admin": true, 00:14:15.028 "nvme_io": true, 00:14:15.028 "read": true, 00:14:15.028 "reset": true, 00:14:15.028 "unmap": true, 00:14:15.028 "write": true, 00:14:15.028 "write_zeroes": true 00:14:15.028 }, 00:14:15.028 "uuid": "a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e", 00:14:15.028 "zoned": false 00:14:15.028 } 00:14:15.028 ] 00:14:15.028 17:17:44 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.028 17:17:44 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77052 00:14:15.028 17:17:44 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:15.297 Running I/O for 10 seconds... 00:14:16.246 Latency(us) 00:14:16.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.246 Nvme0n1 : 1.00 7026.00 27.45 0.00 0.00 0.00 0.00 0.00 00:14:16.246 =================================================================================================================== 00:14:16.246 Total : 7026.00 27.45 0.00 0.00 0.00 0.00 0.00 00:14:16.246 00:14:17.182 17:17:46 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:17.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.182 Nvme0n1 : 2.00 6944.00 27.12 0.00 0.00 0.00 0.00 0.00 00:14:17.182 =================================================================================================================== 00:14:17.182 Total : 6944.00 27.12 0.00 0.00 0.00 0.00 0.00 00:14:17.182 00:14:17.441 true 00:14:17.441 17:17:47 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:17.441 17:17:47 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:17.699 17:17:47 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:17.699 17:17:47 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:17.699 17:17:47 -- target/nvmf_lvs_grow.sh@65 -- # wait 77052 00:14:18.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.265 Nvme0n1 : 3.00 6996.00 27.33 0.00 0.00 0.00 0.00 0.00 00:14:18.265 =================================================================================================================== 00:14:18.265 Total : 6996.00 27.33 0.00 0.00 0.00 0.00 0.00 00:14:18.265 00:14:19.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.200 Nvme0n1 : 4.00 6985.50 27.29 0.00 0.00 0.00 0.00 0.00 00:14:19.200 =================================================================================================================== 00:14:19.200 Total : 6985.50 27.29 0.00 0.00 0.00 0.00 0.00 00:14:19.200 00:14:20.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.135 Nvme0n1 : 5.00 6986.40 27.29 0.00 0.00 0.00 0.00 0.00 00:14:20.135 =================================================================================================================== 00:14:20.135 Total : 6986.40 27.29 0.00 0.00 0.00 0.00 0.00 00:14:20.135 00:14:21.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.512 Nvme0n1 : 6.00 7007.33 27.37 
0.00 0.00 0.00 0.00 0.00 00:14:21.512 =================================================================================================================== 00:14:21.513 Total : 7007.33 27.37 0.00 0.00 0.00 0.00 0.00 00:14:21.513 00:14:22.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.450 Nvme0n1 : 7.00 7013.29 27.40 0.00 0.00 0.00 0.00 0.00 00:14:22.450 =================================================================================================================== 00:14:22.450 Total : 7013.29 27.40 0.00 0.00 0.00 0.00 0.00 00:14:22.450 00:14:23.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.387 Nvme0n1 : 8.00 7018.25 27.42 0.00 0.00 0.00 0.00 0.00 00:14:23.387 =================================================================================================================== 00:14:23.387 Total : 7018.25 27.42 0.00 0.00 0.00 0.00 0.00 00:14:23.387 00:14:24.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.323 Nvme0n1 : 9.00 7028.89 27.46 0.00 0.00 0.00 0.00 0.00 00:14:24.323 =================================================================================================================== 00:14:24.323 Total : 7028.89 27.46 0.00 0.00 0.00 0.00 0.00 00:14:24.323 00:14:25.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.261 Nvme0n1 : 10.00 7046.20 27.52 0.00 0.00 0.00 0.00 0.00 00:14:25.261 =================================================================================================================== 00:14:25.261 Total : 7046.20 27.52 0.00 0.00 0.00 0.00 0.00 00:14:25.261 00:14:25.261 00:14:25.261 Latency(us) 00:14:25.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.261 Nvme0n1 : 10.01 7049.34 27.54 0.00 0.00 18144.82 8281.37 36938.47 00:14:25.261 =================================================================================================================== 00:14:25.261 Total : 7049.34 27.54 0.00 0.00 18144.82 8281.37 36938.47 00:14:25.261 0 00:14:25.261 17:17:55 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77024 00:14:25.261 17:17:55 -- common/autotest_common.sh@936 -- # '[' -z 77024 ']' 00:14:25.261 17:17:55 -- common/autotest_common.sh@940 -- # kill -0 77024 00:14:25.261 17:17:55 -- common/autotest_common.sh@941 -- # uname 00:14:25.261 17:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:25.261 17:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77024 00:14:25.261 17:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:25.261 17:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:25.261 killing process with pid 77024 00:14:25.261 17:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77024' 00:14:25.261 Received shutdown signal, test time was about 10.000000 seconds 00:14:25.261 00:14:25.261 Latency(us) 00:14:25.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.261 =================================================================================================================== 00:14:25.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.261 17:17:55 -- common/autotest_common.sh@955 -- # kill 77024 00:14:25.261 17:17:55 -- common/autotest_common.sh@960 -- # wait 77024 00:14:25.520 17:17:55 -- target/nvmf_lvs_grow.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:25.779 17:17:55 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:25.779 17:17:55 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:25.779 17:17:55 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:25.779 17:17:55 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:25.779 17:17:55 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:26.039 [2024-04-25 17:17:55.920290] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:26.039 17:17:55 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:26.039 17:17:55 -- common/autotest_common.sh@638 -- # local es=0 00:14:26.039 17:17:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:26.039 17:17:55 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.039 17:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.039 17:17:55 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.039 17:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.039 17:17:55 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.039 17:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.039 17:17:55 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.039 17:17:55 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:26.039 17:17:55 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:26.299 2024/04/25 17:17:56 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c87cd02b-7415-4d7a-aa60-eb554520fb31], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:26.299 request: 00:14:26.299 { 00:14:26.299 "method": "bdev_lvol_get_lvstores", 00:14:26.299 "params": { 00:14:26.299 "uuid": "c87cd02b-7415-4d7a-aa60-eb554520fb31" 00:14:26.299 } 00:14:26.299 } 00:14:26.299 Got JSON-RPC error response 00:14:26.299 GoRPCClient: error on JSON-RPC call 00:14:26.299 17:17:56 -- common/autotest_common.sh@641 -- # es=1 00:14:26.299 17:17:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:26.299 17:17:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:26.299 17:17:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:26.299 17:17:56 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:26.559 aio_bdev 00:14:26.559 17:17:56 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e 00:14:26.559 17:17:56 -- common/autotest_common.sh@885 -- # local bdev_name=a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e 00:14:26.559 17:17:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:26.559 17:17:56 -- common/autotest_common.sh@887 -- # 
local i 00:14:26.559 17:17:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:26.559 17:17:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:26.559 17:17:56 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:26.819 17:17:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e -t 2000 00:14:27.104 [ 00:14:27.104 { 00:14:27.104 "aliases": [ 00:14:27.104 "lvs/lvol" 00:14:27.104 ], 00:14:27.104 "assigned_rate_limits": { 00:14:27.104 "r_mbytes_per_sec": 0, 00:14:27.104 "rw_ios_per_sec": 0, 00:14:27.104 "rw_mbytes_per_sec": 0, 00:14:27.104 "w_mbytes_per_sec": 0 00:14:27.104 }, 00:14:27.104 "block_size": 4096, 00:14:27.104 "claimed": false, 00:14:27.104 "driver_specific": { 00:14:27.104 "lvol": { 00:14:27.104 "base_bdev": "aio_bdev", 00:14:27.104 "clone": false, 00:14:27.104 "esnap_clone": false, 00:14:27.104 "lvol_store_uuid": "c87cd02b-7415-4d7a-aa60-eb554520fb31", 00:14:27.104 "snapshot": false, 00:14:27.104 "thin_provision": false 00:14:27.104 } 00:14:27.104 }, 00:14:27.104 "name": "a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e", 00:14:27.104 "num_blocks": 38912, 00:14:27.104 "product_name": "Logical Volume", 00:14:27.104 "supported_io_types": { 00:14:27.104 "abort": false, 00:14:27.104 "compare": false, 00:14:27.104 "compare_and_write": false, 00:14:27.104 "flush": false, 00:14:27.104 "nvme_admin": false, 00:14:27.104 "nvme_io": false, 00:14:27.104 "read": true, 00:14:27.104 "reset": true, 00:14:27.104 "unmap": true, 00:14:27.104 "write": true, 00:14:27.104 "write_zeroes": true 00:14:27.104 }, 00:14:27.104 "uuid": "a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e", 00:14:27.104 "zoned": false 00:14:27.104 } 00:14:27.104 ] 00:14:27.104 17:17:56 -- common/autotest_common.sh@893 -- # return 0 00:14:27.104 17:17:56 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:27.104 17:17:56 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:27.362 17:17:57 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:27.362 17:17:57 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:27.362 17:17:57 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:27.620 17:17:57 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:27.620 17:17:57 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a05bee3d-36a6-47aa-b07c-dac5ff0c5f6e 00:14:27.620 17:17:57 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c87cd02b-7415-4d7a-aa60-eb554520fb31 00:14:28.187 17:17:57 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:28.187 17:17:58 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:28.754 ************************************ 00:14:28.754 END TEST lvs_grow_clean 00:14:28.754 ************************************ 00:14:28.754 00:14:28.754 real 0m16.842s 00:14:28.754 user 0m16.188s 00:14:28.754 sys 0m1.928s 00:14:28.754 17:17:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.754 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty 
lvs_grow dirty 00:14:28.754 17:17:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.754 17:17:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.754 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.754 ************************************ 00:14:28.754 START TEST lvs_grow_dirty 00:14:28.754 ************************************ 00:14:28.754 17:17:58 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:28.754 17:17:58 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:28.755 17:17:58 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:28.755 17:17:58 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:29.013 17:17:58 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:29.013 17:17:58 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:29.272 17:17:59 -- target/nvmf_lvs_grow.sh@28 -- # lvs=0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:29.272 17:17:59 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:29.272 17:17:59 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:29.530 17:17:59 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:29.530 17:17:59 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:29.530 17:17:59 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d lvol 150 00:14:29.789 17:17:59 -- target/nvmf_lvs_grow.sh@33 -- # lvol=16f0336f-0354-4f77-8a5f-4da12b6264d0 00:14:29.789 17:17:59 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:29.789 17:17:59 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:29.789 [2024-04-25 17:17:59.705445] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:29.789 [2024-04-25 17:17:59.705528] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:29.789 true 00:14:29.789 17:17:59 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:29.789 17:17:59 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:30.048 17:17:59 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:30.048 17:17:59 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:14:30.308 17:18:00 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16f0336f-0354-4f77-8a5f-4da12b6264d0 00:14:30.567 17:18:00 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:30.826 17:18:00 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:31.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.085 17:18:00 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:31.085 17:18:00 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=77436 00:14:31.085 17:18:00 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:31.085 17:18:00 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 77436 /var/tmp/bdevperf.sock 00:14:31.085 17:18:00 -- common/autotest_common.sh@817 -- # '[' -z 77436 ']' 00:14:31.085 17:18:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.085 17:18:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.085 17:18:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.085 17:18:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.085 17:18:00 -- common/autotest_common.sh@10 -- # set +x 00:14:31.085 [2024-04-25 17:18:00.933376] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:14:31.085 [2024-04-25 17:18:00.933460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77436 ] 00:14:31.344 [2024-04-25 17:18:01.071070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.344 [2024-04-25 17:18:01.137743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.344 17:18:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:31.344 17:18:01 -- common/autotest_common.sh@850 -- # return 0 00:14:31.344 17:18:01 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:31.603 Nvme0n1 00:14:31.603 17:18:01 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:31.862 [ 00:14:31.862 { 00:14:31.862 "aliases": [ 00:14:31.862 "16f0336f-0354-4f77-8a5f-4da12b6264d0" 00:14:31.862 ], 00:14:31.862 "assigned_rate_limits": { 00:14:31.862 "r_mbytes_per_sec": 0, 00:14:31.862 "rw_ios_per_sec": 0, 00:14:31.862 "rw_mbytes_per_sec": 0, 00:14:31.862 "w_mbytes_per_sec": 0 00:14:31.862 }, 00:14:31.862 "block_size": 4096, 00:14:31.862 "claimed": false, 00:14:31.862 "driver_specific": { 00:14:31.862 "mp_policy": "active_passive", 00:14:31.862 "nvme": [ 00:14:31.862 { 00:14:31.862 "ctrlr_data": { 00:14:31.862 "ana_reporting": false, 00:14:31.862 "cntlid": 1, 00:14:31.862 "firmware_revision": "24.05", 00:14:31.862 "model_number": "SPDK bdev Controller", 00:14:31.862 "multi_ctrlr": true, 00:14:31.862 "oacs": { 00:14:31.862 "firmware": 0, 00:14:31.862 "format": 0, 00:14:31.862 "ns_manage": 0, 00:14:31.862 "security": 0 00:14:31.862 }, 00:14:31.862 "serial_number": "SPDK0", 00:14:31.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.862 "vendor_id": "0x8086" 00:14:31.862 }, 00:14:31.862 "ns_data": { 00:14:31.862 "can_share": true, 00:14:31.862 "id": 1 00:14:31.862 }, 00:14:31.862 "trid": { 00:14:31.862 "adrfam": "IPv4", 00:14:31.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.862 "traddr": "10.0.0.2", 00:14:31.862 "trsvcid": "4420", 00:14:31.862 "trtype": "TCP" 00:14:31.862 }, 00:14:31.862 "vs": { 00:14:31.862 "nvme_version": "1.3" 00:14:31.862 } 00:14:31.862 } 00:14:31.862 ] 00:14:31.862 }, 00:14:31.862 "memory_domains": [ 00:14:31.862 { 00:14:31.862 "dma_device_id": "system", 00:14:31.862 "dma_device_type": 1 00:14:31.862 } 00:14:31.862 ], 00:14:31.862 "name": "Nvme0n1", 00:14:31.862 "num_blocks": 38912, 00:14:31.862 "product_name": "NVMe disk", 00:14:31.862 "supported_io_types": { 00:14:31.862 "abort": true, 00:14:31.862 "compare": true, 00:14:31.862 "compare_and_write": true, 00:14:31.862 "flush": true, 00:14:31.862 "nvme_admin": true, 00:14:31.862 "nvme_io": true, 00:14:31.862 "read": true, 00:14:31.862 "reset": true, 00:14:31.862 "unmap": true, 00:14:31.862 "write": true, 00:14:31.862 "write_zeroes": true 00:14:31.862 }, 00:14:31.862 "uuid": "16f0336f-0354-4f77-8a5f-4da12b6264d0", 00:14:31.862 "zoned": false 00:14:31.862 } 00:14:31.862 ] 00:14:31.862 17:18:01 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=77469 00:14:31.862 17:18:01 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:31.862 17:18:01 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock 
perform_tests 00:14:31.862 Running I/O for 10 seconds... 00:14:33.240 Latency(us) 00:14:33.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.240 Nvme0n1 : 1.00 7512.00 29.34 0.00 0.00 0.00 0.00 0.00 00:14:33.240 =================================================================================================================== 00:14:33.240 Total : 7512.00 29.34 0.00 0.00 0.00 0.00 0.00 00:14:33.240 00:14:33.807 17:18:03 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:34.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.065 Nvme0n1 : 2.00 7490.00 29.26 0.00 0.00 0.00 0.00 0.00 00:14:34.065 =================================================================================================================== 00:14:34.065 Total : 7490.00 29.26 0.00 0.00 0.00 0.00 0.00 00:14:34.065 00:14:34.065 true 00:14:34.065 17:18:04 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:34.065 17:18:04 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:34.633 17:18:04 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:34.633 17:18:04 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:34.633 17:18:04 -- target/nvmf_lvs_grow.sh@65 -- # wait 77469 00:14:34.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.892 Nvme0n1 : 3.00 7523.67 29.39 0.00 0.00 0.00 0.00 0.00 00:14:34.892 =================================================================================================================== 00:14:34.892 Total : 7523.67 29.39 0.00 0.00 0.00 0.00 0.00 00:14:34.892 00:14:36.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.270 Nvme0n1 : 4.00 7521.75 29.38 0.00 0.00 0.00 0.00 0.00 00:14:36.270 =================================================================================================================== 00:14:36.270 Total : 7521.75 29.38 0.00 0.00 0.00 0.00 0.00 00:14:36.270 00:14:37.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.208 Nvme0n1 : 5.00 7460.40 29.14 0.00 0.00 0.00 0.00 0.00 00:14:37.208 =================================================================================================================== 00:14:37.208 Total : 7460.40 29.14 0.00 0.00 0.00 0.00 0.00 00:14:37.208 00:14:38.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.144 Nvme0n1 : 6.00 7443.17 29.07 0.00 0.00 0.00 0.00 0.00 00:14:38.144 =================================================================================================================== 00:14:38.144 Total : 7443.17 29.07 0.00 0.00 0.00 0.00 0.00 00:14:38.144 00:14:39.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.083 Nvme0n1 : 7.00 7418.43 28.98 0.00 0.00 0.00 0.00 0.00 00:14:39.083 =================================================================================================================== 00:14:39.083 Total : 7418.43 28.98 0.00 0.00 0.00 0.00 0.00 00:14:39.083 00:14:40.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.020 Nvme0n1 : 8.00 7387.38 28.86 0.00 0.00 0.00 0.00 0.00 00:14:40.020 
=================================================================================================================== 00:14:40.020 Total : 7387.38 28.86 0.00 0.00 0.00 0.00 0.00 00:14:40.020 00:14:40.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.954 Nvme0n1 : 9.00 7189.22 28.08 0.00 0.00 0.00 0.00 0.00 00:14:40.954 =================================================================================================================== 00:14:40.954 Total : 7189.22 28.08 0.00 0.00 0.00 0.00 0.00 00:14:40.954 00:14:41.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.890 Nvme0n1 : 10.00 7165.70 27.99 0.00 0.00 0.00 0.00 0.00 00:14:41.890 =================================================================================================================== 00:14:41.890 Total : 7165.70 27.99 0.00 0.00 0.00 0.00 0.00 00:14:41.890 00:14:41.890 00:14:41.890 Latency(us) 00:14:41.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.890 Nvme0n1 : 10.01 7173.04 28.02 0.00 0.00 17839.19 7566.43 244032.23 00:14:41.890 =================================================================================================================== 00:14:41.890 Total : 7173.04 28.02 0.00 0.00 17839.19 7566.43 244032.23 00:14:41.890 0 00:14:41.890 17:18:11 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 77436 00:14:41.890 17:18:11 -- common/autotest_common.sh@936 -- # '[' -z 77436 ']' 00:14:41.890 17:18:11 -- common/autotest_common.sh@940 -- # kill -0 77436 00:14:41.890 17:18:11 -- common/autotest_common.sh@941 -- # uname 00:14:41.890 17:18:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.890 17:18:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77436 00:14:42.149 17:18:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:42.149 17:18:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:42.149 17:18:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77436' 00:14:42.149 killing process with pid 77436 00:14:42.149 17:18:11 -- common/autotest_common.sh@955 -- # kill 77436 00:14:42.149 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.149 00:14:42.149 Latency(us) 00:14:42.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.149 =================================================================================================================== 00:14:42.149 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.149 17:18:11 -- common/autotest_common.sh@960 -- # wait 77436 00:14:42.149 17:18:12 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:42.409 17:18:12 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:42.409 17:18:12 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:42.668 17:18:12 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:42.668 17:18:12 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:42.668 17:18:12 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 76856 00:14:42.668 17:18:12 -- target/nvmf_lvs_grow.sh@74 -- # wait 76856 00:14:42.668 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 76856 Killed "${NVMF_APP[@]}" "$@" 00:14:42.668 17:18:12 
-- target/nvmf_lvs_grow.sh@74 -- # true 00:14:42.668 17:18:12 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:42.668 17:18:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:42.668 17:18:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:42.668 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:14:42.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.668 17:18:12 -- nvmf/common.sh@470 -- # nvmfpid=77621 00:14:42.668 17:18:12 -- nvmf/common.sh@471 -- # waitforlisten 77621 00:14:42.668 17:18:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:42.668 17:18:12 -- common/autotest_common.sh@817 -- # '[' -z 77621 ']' 00:14:42.668 17:18:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.668 17:18:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.668 17:18:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.668 17:18:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.668 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:14:42.927 [2024-04-25 17:18:12.670617] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:42.927 [2024-04-25 17:18:12.670887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.927 [2024-04-25 17:18:12.804217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.927 [2024-04-25 17:18:12.853539] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.927 [2024-04-25 17:18:12.853869] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.927 [2024-04-25 17:18:12.854011] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.927 [2024-04-25 17:18:12.854143] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.927 [2024-04-25 17:18:12.854192] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
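The first target (pid 76856) was killed with SIGKILL while the lvstore was in use, so the target restarted above has to recover it: re-attaching the same aio file triggers blobstore recovery during examine, after which the store (its backing file grown to 400M and bdev_lvol_grow_lvstore called earlier in the run) should still be intact. A rough standalone equivalent of the checks that follow, with the path, bdev name, UUID and cluster counts taken from this run:

  # re-create the aio bdev on the same 400M backing file (4096-byte blocks); examine runs blobstore recovery
  ./scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_wait_for_examine
  # the lvol created before the kill should reappear under its old UUID
  ./scripts/rpc.py bdev_get_bdevs -b 16f0336f-0354-4f77-8a5f-4da12b6264d0 -t 2000
  # the recovered store should report 99 total data clusters and 61 free, as the checks below assert
  ./scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d | jq -r '.[0].total_data_clusters'
  ./scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d | jq -r '.[0].free_clusters'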
00:14:42.927 [2024-04-25 17:18:12.854306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.187 17:18:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.187 17:18:12 -- common/autotest_common.sh@850 -- # return 0 00:14:43.187 17:18:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.187 17:18:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.187 17:18:12 -- common/autotest_common.sh@10 -- # set +x 00:14:43.187 17:18:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.187 17:18:12 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.445 [2024-04-25 17:18:13.229393] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:43.445 [2024-04-25 17:18:13.229924] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:43.445 [2024-04-25 17:18:13.230293] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:43.445 17:18:13 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:43.445 17:18:13 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 16f0336f-0354-4f77-8a5f-4da12b6264d0 00:14:43.445 17:18:13 -- common/autotest_common.sh@885 -- # local bdev_name=16f0336f-0354-4f77-8a5f-4da12b6264d0 00:14:43.445 17:18:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:43.445 17:18:13 -- common/autotest_common.sh@887 -- # local i 00:14:43.445 17:18:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:43.445 17:18:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:43.445 17:18:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.703 17:18:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16f0336f-0354-4f77-8a5f-4da12b6264d0 -t 2000 00:14:43.962 [ 00:14:43.962 { 00:14:43.962 "aliases": [ 00:14:43.962 "lvs/lvol" 00:14:43.962 ], 00:14:43.962 "assigned_rate_limits": { 00:14:43.962 "r_mbytes_per_sec": 0, 00:14:43.962 "rw_ios_per_sec": 0, 00:14:43.962 "rw_mbytes_per_sec": 0, 00:14:43.962 "w_mbytes_per_sec": 0 00:14:43.962 }, 00:14:43.962 "block_size": 4096, 00:14:43.962 "claimed": false, 00:14:43.962 "driver_specific": { 00:14:43.962 "lvol": { 00:14:43.962 "base_bdev": "aio_bdev", 00:14:43.962 "clone": false, 00:14:43.962 "esnap_clone": false, 00:14:43.962 "lvol_store_uuid": "0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d", 00:14:43.962 "snapshot": false, 00:14:43.962 "thin_provision": false 00:14:43.962 } 00:14:43.962 }, 00:14:43.962 "name": "16f0336f-0354-4f77-8a5f-4da12b6264d0", 00:14:43.962 "num_blocks": 38912, 00:14:43.962 "product_name": "Logical Volume", 00:14:43.962 "supported_io_types": { 00:14:43.962 "abort": false, 00:14:43.962 "compare": false, 00:14:43.962 "compare_and_write": false, 00:14:43.962 "flush": false, 00:14:43.962 "nvme_admin": false, 00:14:43.962 "nvme_io": false, 00:14:43.962 "read": true, 00:14:43.962 "reset": true, 00:14:43.962 "unmap": true, 00:14:43.962 "write": true, 00:14:43.962 "write_zeroes": true 00:14:43.962 }, 00:14:43.962 "uuid": "16f0336f-0354-4f77-8a5f-4da12b6264d0", 00:14:43.962 "zoned": false 00:14:43.962 } 00:14:43.962 ] 00:14:43.962 17:18:13 -- common/autotest_common.sh@893 -- # return 0 00:14:43.962 17:18:13 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:43.962 17:18:13 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:44.220 17:18:13 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:44.220 17:18:13 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:44.220 17:18:13 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:44.479 17:18:14 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:44.479 17:18:14 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.739 [2024-04-25 17:18:14.470865] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:44.739 17:18:14 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:44.739 17:18:14 -- common/autotest_common.sh@638 -- # local es=0 00:14:44.739 17:18:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:44.739 17:18:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.739 17:18:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.739 17:18:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.739 17:18:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.739 17:18:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.739 17:18:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.739 17:18:14 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.739 17:18:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:44.739 17:18:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:44.739 2024/04/25 17:18:14 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:44.739 request: 00:14:44.739 { 00:14:44.739 "method": "bdev_lvol_get_lvstores", 00:14:44.739 "params": { 00:14:44.739 "uuid": "0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d" 00:14:44.739 } 00:14:44.739 } 00:14:44.739 Got JSON-RPC error response 00:14:44.739 GoRPCClient: error on JSON-RPC call 00:14:44.998 17:18:14 -- common/autotest_common.sh@641 -- # es=1 00:14:44.998 17:18:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:44.998 17:18:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:44.998 17:18:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:44.998 17:18:14 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.258 aio_bdev 00:14:45.258 17:18:15 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 16f0336f-0354-4f77-8a5f-4da12b6264d0 00:14:45.258 17:18:15 -- common/autotest_common.sh@885 -- # local bdev_name=16f0336f-0354-4f77-8a5f-4da12b6264d0 00:14:45.258 17:18:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:45.258 
17:18:15 -- common/autotest_common.sh@887 -- # local i 00:14:45.258 17:18:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:45.258 17:18:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:45.258 17:18:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:45.258 17:18:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16f0336f-0354-4f77-8a5f-4da12b6264d0 -t 2000 00:14:45.517 [ 00:14:45.517 { 00:14:45.517 "aliases": [ 00:14:45.517 "lvs/lvol" 00:14:45.517 ], 00:14:45.517 "assigned_rate_limits": { 00:14:45.517 "r_mbytes_per_sec": 0, 00:14:45.517 "rw_ios_per_sec": 0, 00:14:45.517 "rw_mbytes_per_sec": 0, 00:14:45.517 "w_mbytes_per_sec": 0 00:14:45.517 }, 00:14:45.517 "block_size": 4096, 00:14:45.517 "claimed": false, 00:14:45.517 "driver_specific": { 00:14:45.517 "lvol": { 00:14:45.517 "base_bdev": "aio_bdev", 00:14:45.517 "clone": false, 00:14:45.517 "esnap_clone": false, 00:14:45.517 "lvol_store_uuid": "0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d", 00:14:45.517 "snapshot": false, 00:14:45.517 "thin_provision": false 00:14:45.517 } 00:14:45.517 }, 00:14:45.517 "name": "16f0336f-0354-4f77-8a5f-4da12b6264d0", 00:14:45.517 "num_blocks": 38912, 00:14:45.517 "product_name": "Logical Volume", 00:14:45.517 "supported_io_types": { 00:14:45.517 "abort": false, 00:14:45.517 "compare": false, 00:14:45.517 "compare_and_write": false, 00:14:45.517 "flush": false, 00:14:45.517 "nvme_admin": false, 00:14:45.517 "nvme_io": false, 00:14:45.517 "read": true, 00:14:45.517 "reset": true, 00:14:45.517 "unmap": true, 00:14:45.517 "write": true, 00:14:45.517 "write_zeroes": true 00:14:45.517 }, 00:14:45.517 "uuid": "16f0336f-0354-4f77-8a5f-4da12b6264d0", 00:14:45.517 "zoned": false 00:14:45.517 } 00:14:45.517 ] 00:14:45.517 17:18:15 -- common/autotest_common.sh@893 -- # return 0 00:14:45.517 17:18:15 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:45.517 17:18:15 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:45.776 17:18:15 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:45.776 17:18:15 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:45.776 17:18:15 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:46.035 17:18:15 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:46.035 17:18:15 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 16f0336f-0354-4f77-8a5f-4da12b6264d0 00:14:46.294 17:18:16 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cf931a6-bdcd-4e3e-b8c6-6bda6fb6785d 00:14:46.584 17:18:16 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.844 17:18:16 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:47.103 ************************************ 00:14:47.103 END TEST lvs_grow_dirty 00:14:47.103 ************************************ 00:14:47.103 00:14:47.103 real 0m18.462s 00:14:47.103 user 0m37.039s 00:14:47.103 sys 0m9.275s 00:14:47.103 17:18:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:47.103 17:18:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.103 17:18:17 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:47.103 17:18:17 -- common/autotest_common.sh@794 -- # type=--id 00:14:47.103 17:18:17 -- common/autotest_common.sh@795 -- # id=0 00:14:47.103 17:18:17 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:47.103 17:18:17 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:47.103 17:18:17 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:47.103 17:18:17 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:47.103 17:18:17 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:47.103 17:18:17 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:47.103 nvmf_trace.0 00:14:47.362 17:18:17 -- common/autotest_common.sh@809 -- # return 0 00:14:47.362 17:18:17 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:47.362 17:18:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:47.362 17:18:17 -- nvmf/common.sh@117 -- # sync 00:14:47.362 17:18:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.362 17:18:17 -- nvmf/common.sh@120 -- # set +e 00:14:47.362 17:18:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.362 17:18:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.362 rmmod nvme_tcp 00:14:47.362 rmmod nvme_fabrics 00:14:47.621 rmmod nvme_keyring 00:14:47.621 17:18:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.621 17:18:17 -- nvmf/common.sh@124 -- # set -e 00:14:47.621 17:18:17 -- nvmf/common.sh@125 -- # return 0 00:14:47.621 17:18:17 -- nvmf/common.sh@478 -- # '[' -n 77621 ']' 00:14:47.621 17:18:17 -- nvmf/common.sh@479 -- # killprocess 77621 00:14:47.621 17:18:17 -- common/autotest_common.sh@936 -- # '[' -z 77621 ']' 00:14:47.621 17:18:17 -- common/autotest_common.sh@940 -- # kill -0 77621 00:14:47.621 17:18:17 -- common/autotest_common.sh@941 -- # uname 00:14:47.621 17:18:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.621 17:18:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77621 00:14:47.621 killing process with pid 77621 00:14:47.621 17:18:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:47.621 17:18:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:47.622 17:18:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77621' 00:14:47.622 17:18:17 -- common/autotest_common.sh@955 -- # kill 77621 00:14:47.622 17:18:17 -- common/autotest_common.sh@960 -- # wait 77621 00:14:47.622 17:18:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:47.622 17:18:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:47.622 17:18:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:47.622 17:18:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.622 17:18:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.622 17:18:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.622 17:18:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.622 17:18:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.622 17:18:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:47.881 00:14:47.881 real 0m37.796s 00:14:47.881 user 0m58.734s 00:14:47.881 sys 0m12.003s 00:14:47.881 17:18:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:47.881 17:18:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.881 
************************************ 00:14:47.881 END TEST nvmf_lvs_grow 00:14:47.881 ************************************ 00:14:47.881 17:18:17 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:47.881 17:18:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:47.881 17:18:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.881 17:18:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.881 ************************************ 00:14:47.881 START TEST nvmf_bdev_io_wait 00:14:47.881 ************************************ 00:14:47.881 17:18:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:47.881 * Looking for test storage... 00:14:47.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:47.881 17:18:17 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.881 17:18:17 -- nvmf/common.sh@7 -- # uname -s 00:14:47.881 17:18:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.881 17:18:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.881 17:18:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.881 17:18:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.881 17:18:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.881 17:18:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.881 17:18:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.881 17:18:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.881 17:18:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.881 17:18:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.881 17:18:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:14:47.881 17:18:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:14:47.881 17:18:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.881 17:18:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.881 17:18:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.881 17:18:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.882 17:18:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.882 17:18:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.882 17:18:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.882 17:18:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.882 17:18:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.882 17:18:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.882 17:18:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.882 17:18:17 -- paths/export.sh@5 -- # export PATH 00:14:47.882 17:18:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.882 17:18:17 -- nvmf/common.sh@47 -- # : 0 00:14:47.882 17:18:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.882 17:18:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.882 17:18:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.882 17:18:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.882 17:18:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.882 17:18:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.882 17:18:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.882 17:18:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.882 17:18:17 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.882 17:18:17 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.882 17:18:17 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:47.882 17:18:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:47.882 17:18:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.882 17:18:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:47.882 17:18:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:47.882 17:18:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:47.882 17:18:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.882 17:18:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.882 17:18:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.882 17:18:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:47.882 17:18:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:47.882 17:18:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:47.882 17:18:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:47.882 17:18:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
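The nvmf_veth_init sequence that follows builds the whole NVMe/TCP test topology out of a network namespace, three veth pairs and a bridge. As a condensed reference, this is a minimal sketch of the same bring-up distilled from the trace below (interface names, addresses and firewall rules as used by this run; it assumes iproute2/iptables and root privileges, and is an illustration rather than the test's common.sh itself):

#!/usr/bin/env bash
# Sketch of the veth/netns topology that nvmf_veth_init creates (run as root).
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Three veth pairs; the *_if ends used by the target move into the namespace,
# the *_br peers stay in the default namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and tie the host-side peers together with a bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Open NVMe/TCP port 4420 towards the initiator interface, allow hairpin
# forwarding on the bridge, and verify connectivity the same way the test does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

The target application itself is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc), so the listeners on 10.0.0.2/10.0.0.3 are reachable only across this bridge from the initiator side.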
00:14:47.882 17:18:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:47.882 17:18:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.882 17:18:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.882 17:18:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:47.882 17:18:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:47.882 17:18:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.882 17:18:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.882 17:18:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.882 17:18:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.882 17:18:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.882 17:18:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.882 17:18:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.882 17:18:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.882 17:18:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:47.882 17:18:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:48.142 Cannot find device "nvmf_tgt_br" 00:14:48.142 17:18:17 -- nvmf/common.sh@155 -- # true 00:14:48.142 17:18:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.142 Cannot find device "nvmf_tgt_br2" 00:14:48.142 17:18:17 -- nvmf/common.sh@156 -- # true 00:14:48.142 17:18:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:48.142 17:18:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:48.142 Cannot find device "nvmf_tgt_br" 00:14:48.142 17:18:17 -- nvmf/common.sh@158 -- # true 00:14:48.142 17:18:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:48.142 Cannot find device "nvmf_tgt_br2" 00:14:48.142 17:18:17 -- nvmf/common.sh@159 -- # true 00:14:48.142 17:18:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:48.142 17:18:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:48.142 17:18:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.142 17:18:17 -- nvmf/common.sh@162 -- # true 00:14:48.142 17:18:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.142 17:18:17 -- nvmf/common.sh@163 -- # true 00:14:48.142 17:18:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.142 17:18:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.142 17:18:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.142 17:18:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.142 17:18:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.142 17:18:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.142 17:18:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.142 17:18:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:48.142 17:18:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:48.142 
17:18:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:48.142 17:18:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:48.142 17:18:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:48.142 17:18:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:48.142 17:18:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.142 17:18:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.142 17:18:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.142 17:18:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:48.142 17:18:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:48.142 17:18:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.142 17:18:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.142 17:18:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.402 17:18:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.402 17:18:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.402 17:18:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:48.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:48.402 00:14:48.402 --- 10.0.0.2 ping statistics --- 00:14:48.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.402 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:48.402 17:18:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:48.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:48.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:14:48.402 00:14:48.402 --- 10.0.0.3 ping statistics --- 00:14:48.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.402 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:48.402 17:18:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:48.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:48.402 00:14:48.402 --- 10.0.0.1 ping statistics --- 00:14:48.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.402 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:48.402 17:18:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.402 17:18:18 -- nvmf/common.sh@422 -- # return 0 00:14:48.402 17:18:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:48.402 17:18:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.402 17:18:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:48.402 17:18:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:48.402 17:18:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.402 17:18:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:48.402 17:18:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:48.402 17:18:18 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:48.402 17:18:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:48.402 17:18:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:48.402 17:18:18 -- common/autotest_common.sh@10 -- # set +x 00:14:48.402 17:18:18 -- nvmf/common.sh@470 -- # nvmfpid=78025 00:14:48.402 17:18:18 -- nvmf/common.sh@471 -- # waitforlisten 78025 00:14:48.402 17:18:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:48.402 17:18:18 -- common/autotest_common.sh@817 -- # '[' -z 78025 ']' 00:14:48.402 17:18:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.402 17:18:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.402 17:18:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.402 17:18:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.402 17:18:18 -- common/autotest_common.sh@10 -- # set +x 00:14:48.402 [2024-04-25 17:18:18.243421] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:48.402 [2024-04-25 17:18:18.243500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.661 [2024-04-25 17:18:18.385062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.661 [2024-04-25 17:18:18.459254] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.661 [2024-04-25 17:18:18.459309] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.661 [2024-04-25 17:18:18.459323] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.661 [2024-04-25 17:18:18.459333] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.661 [2024-04-25 17:18:18.459342] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:48.661 [2024-04-25 17:18:18.459969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.661 [2024-04-25 17:18:18.460039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.661 [2024-04-25 17:18:18.461430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.661 [2024-04-25 17:18:18.461489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.601 17:18:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.601 17:18:19 -- common/autotest_common.sh@850 -- # return 0 00:14:49.601 17:18:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:49.601 17:18:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 17:18:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:49.601 17:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 17:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:49.601 17:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 17:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.601 17:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 [2024-04-25 17:18:19.336522] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.601 17:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.601 17:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 Malloc0 00:14:49.601 17:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:49.601 17:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 17:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.601 17:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 17:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.601 17:18:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:49.601 17:18:19 -- common/autotest_common.sh@10 -- # set +x 00:14:49.601 [2024-04-25 17:18:19.382947] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.601 17:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78078 00:14:49.601 17:18:19 
-- target/bdev_io_wait.sh@30 -- # READ_PID=78080 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78082 00:14:49.601 17:18:19 -- nvmf/common.sh@521 -- # config=() 00:14:49.601 17:18:19 -- nvmf/common.sh@521 -- # local subsystem config 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:49.601 17:18:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:49.601 17:18:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:49.601 { 00:14:49.601 "params": { 00:14:49.601 "name": "Nvme$subsystem", 00:14:49.601 "trtype": "$TEST_TRANSPORT", 00:14:49.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.601 "adrfam": "ipv4", 00:14:49.601 "trsvcid": "$NVMF_PORT", 00:14:49.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.601 "hdgst": ${hdgst:-false}, 00:14:49.601 "ddgst": ${ddgst:-false} 00:14:49.601 }, 00:14:49.601 "method": "bdev_nvme_attach_controller" 00:14:49.601 } 00:14:49.601 EOF 00:14:49.601 )") 00:14:49.601 17:18:19 -- nvmf/common.sh@521 -- # config=() 00:14:49.601 17:18:19 -- nvmf/common.sh@521 -- # local subsystem config 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78084 00:14:49.601 17:18:19 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:49.601 17:18:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:49.601 17:18:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:49.601 { 00:14:49.601 "params": { 00:14:49.601 "name": "Nvme$subsystem", 00:14:49.601 "trtype": "$TEST_TRANSPORT", 00:14:49.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.601 "adrfam": "ipv4", 00:14:49.601 "trsvcid": "$NVMF_PORT", 00:14:49.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.602 "hdgst": ${hdgst:-false}, 00:14:49.602 "ddgst": ${ddgst:-false} 00:14:49.602 }, 00:14:49.602 "method": "bdev_nvme_attach_controller" 00:14:49.602 } 00:14:49.602 EOF 00:14:49.602 )") 00:14:49.602 17:18:19 -- nvmf/common.sh@543 -- # cat 00:14:49.602 17:18:19 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:49.602 17:18:19 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:49.602 17:18:19 -- nvmf/common.sh@521 -- # config=() 00:14:49.602 17:18:19 -- nvmf/common.sh@521 -- # local subsystem config 00:14:49.602 17:18:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:49.602 17:18:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:49.602 { 00:14:49.602 "params": { 00:14:49.602 "name": "Nvme$subsystem", 00:14:49.602 "trtype": "$TEST_TRANSPORT", 00:14:49.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.602 "adrfam": "ipv4", 00:14:49.602 "trsvcid": "$NVMF_PORT", 00:14:49.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.602 "hdgst": ${hdgst:-false}, 00:14:49.602 "ddgst": 
${ddgst:-false} 00:14:49.602 }, 00:14:49.602 "method": "bdev_nvme_attach_controller" 00:14:49.602 } 00:14:49.602 EOF 00:14:49.602 )") 00:14:49.602 17:18:19 -- nvmf/common.sh@543 -- # cat 00:14:49.602 17:18:19 -- target/bdev_io_wait.sh@35 -- # sync 00:14:49.602 17:18:19 -- nvmf/common.sh@543 -- # cat 00:14:49.602 17:18:19 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:49.602 17:18:19 -- nvmf/common.sh@521 -- # config=() 00:14:49.602 17:18:19 -- nvmf/common.sh@521 -- # local subsystem config 00:14:49.602 17:18:19 -- nvmf/common.sh@545 -- # jq . 00:14:49.602 17:18:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:49.602 17:18:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:49.602 { 00:14:49.602 "params": { 00:14:49.602 "name": "Nvme$subsystem", 00:14:49.602 "trtype": "$TEST_TRANSPORT", 00:14:49.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.602 "adrfam": "ipv4", 00:14:49.602 "trsvcid": "$NVMF_PORT", 00:14:49.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.602 "hdgst": ${hdgst:-false}, 00:14:49.602 "ddgst": ${ddgst:-false} 00:14:49.602 }, 00:14:49.602 "method": "bdev_nvme_attach_controller" 00:14:49.602 } 00:14:49.602 EOF 00:14:49.602 )") 00:14:49.602 17:18:19 -- nvmf/common.sh@545 -- # jq . 00:14:49.602 17:18:19 -- nvmf/common.sh@546 -- # IFS=, 00:14:49.602 17:18:19 -- nvmf/common.sh@543 -- # cat 00:14:49.602 17:18:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:49.602 "params": { 00:14:49.602 "name": "Nvme1", 00:14:49.602 "trtype": "tcp", 00:14:49.602 "traddr": "10.0.0.2", 00:14:49.602 "adrfam": "ipv4", 00:14:49.602 "trsvcid": "4420", 00:14:49.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.602 "hdgst": false, 00:14:49.602 "ddgst": false 00:14:49.602 }, 00:14:49.602 "method": "bdev_nvme_attach_controller" 00:14:49.602 }' 00:14:49.602 17:18:19 -- nvmf/common.sh@546 -- # IFS=, 00:14:49.602 17:18:19 -- nvmf/common.sh@545 -- # jq . 00:14:49.602 17:18:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:49.602 "params": { 00:14:49.602 "name": "Nvme1", 00:14:49.602 "trtype": "tcp", 00:14:49.602 "traddr": "10.0.0.2", 00:14:49.602 "adrfam": "ipv4", 00:14:49.602 "trsvcid": "4420", 00:14:49.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.602 "hdgst": false, 00:14:49.602 "ddgst": false 00:14:49.602 }, 00:14:49.602 "method": "bdev_nvme_attach_controller" 00:14:49.602 }' 00:14:49.602 17:18:19 -- nvmf/common.sh@546 -- # IFS=, 00:14:49.602 17:18:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:49.602 "params": { 00:14:49.602 "name": "Nvme1", 00:14:49.602 "trtype": "tcp", 00:14:49.602 "traddr": "10.0.0.2", 00:14:49.602 "adrfam": "ipv4", 00:14:49.602 "trsvcid": "4420", 00:14:49.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.602 "hdgst": false, 00:14:49.602 "ddgst": false 00:14:49.602 }, 00:14:49.602 "method": "bdev_nvme_attach_controller" 00:14:49.602 }' 00:14:49.602 17:18:19 -- nvmf/common.sh@545 -- # jq . 
00:14:49.602 17:18:19 -- nvmf/common.sh@546 -- # IFS=, 00:14:49.602 17:18:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:49.602 "params": { 00:14:49.602 "name": "Nvme1", 00:14:49.602 "trtype": "tcp", 00:14:49.602 "traddr": "10.0.0.2", 00:14:49.602 "adrfam": "ipv4", 00:14:49.602 "trsvcid": "4420", 00:14:49.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.602 "hdgst": false, 00:14:49.602 "ddgst": false 00:14:49.602 }, 00:14:49.602 "method": "bdev_nvme_attach_controller" 00:14:49.602 }' 00:14:49.602 [2024-04-25 17:18:19.446679] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:49.602 [2024-04-25 17:18:19.446775] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:49.602 [2024-04-25 17:18:19.451215] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:49.602 [2024-04-25 17:18:19.451289] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:49.602 17:18:19 -- target/bdev_io_wait.sh@37 -- # wait 78078 00:14:49.602 [2024-04-25 17:18:19.473181] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:49.602 [2024-04-25 17:18:19.473254] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:49.602 [2024-04-25 17:18:19.488912] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:49.602 [2024-04-25 17:18:19.489023] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:49.862 [2024-04-25 17:18:19.631802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.862 [2024-04-25 17:18:19.672157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.862 [2024-04-25 17:18:19.685637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:49.862 [2024-04-25 17:18:19.711859] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.862 [2024-04-25 17:18:19.725350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:49.862 [2024-04-25 17:18:19.753654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.862 [2024-04-25 17:18:19.767355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:49.862 [2024-04-25 17:18:19.806980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:49.862 Running I/O for 1 seconds... 00:14:50.121 Running I/O for 1 seconds... 00:14:50.121 Running I/O for 1 seconds... 00:14:50.121 Running I/O for 1 seconds... 
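Each of the four bdevperf jobs above (write, read, flush, unmap) receives its target description on /dev/fd/63 from gen_nvmf_target_json, which expands the heredoc shown in the trace into a bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420. A minimal sketch of one resolved invocation follows; the "params" block and the bdevperf flags are the values printed above, but the outer "subsystems"/"bdev" wrapper is an assumption about the helper's output rather than something visible in this excerpt:

# Hypothetical stand-in for gen_nvmf_target_json: wraps the controller
# parameters printed in the trace into an SPDK --json config (wrapper layout
# assumed; the "params" object is taken verbatim from the log above).
gen_target_json() {
cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# The write job; the read/flush/unmap jobs differ only in core mask (-m),
# instance id (-i) and workload (-w), as listed in the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json <(gen_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256

The four jobs run concurrently against the same cnode1 subsystem and are reaped with wait on their PIDs (78078/78080/78082/78084 in this run), which is what the per-workload result tables below correspond to.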
00:14:51.058 00:14:51.058 Latency(us) 00:14:51.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.058 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:51.058 Nvme1n1 : 1.02 6199.32 24.22 0.00 0.00 20460.03 8698.41 32887.16 00:14:51.058 =================================================================================================================== 00:14:51.058 Total : 6199.32 24.22 0.00 0.00 20460.03 8698.41 32887.16 00:14:51.058 00:14:51.058 Latency(us) 00:14:51.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.059 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:51.059 Nvme1n1 : 1.00 196321.72 766.88 0.00 0.00 649.23 284.86 1087.30 00:14:51.059 =================================================================================================================== 00:14:51.059 Total : 196321.72 766.88 0.00 0.00 649.23 284.86 1087.30 00:14:51.059 00:14:51.059 Latency(us) 00:14:51.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.059 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:51.059 Nvme1n1 : 1.01 8831.13 34.50 0.00 0.00 14425.77 5600.35 22997.18 00:14:51.059 =================================================================================================================== 00:14:51.059 Total : 8831.13 34.50 0.00 0.00 14425.77 5600.35 22997.18 00:14:51.059 00:14:51.059 Latency(us) 00:14:51.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.059 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:51.059 Nvme1n1 : 1.01 5992.42 23.41 0.00 0.00 21274.93 7268.54 46709.29 00:14:51.059 =================================================================================================================== 00:14:51.059 Total : 5992.42 23.41 0.00 0.00 21274.93 7268.54 46709.29 00:14:51.318 17:18:21 -- target/bdev_io_wait.sh@38 -- # wait 78080 00:14:51.318 17:18:21 -- target/bdev_io_wait.sh@39 -- # wait 78082 00:14:51.318 17:18:21 -- target/bdev_io_wait.sh@40 -- # wait 78084 00:14:51.318 17:18:21 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.318 17:18:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.318 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:14:51.318 17:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.318 17:18:21 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:51.318 17:18:21 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:51.318 17:18:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:51.318 17:18:21 -- nvmf/common.sh@117 -- # sync 00:14:51.318 17:18:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.318 17:18:21 -- nvmf/common.sh@120 -- # set +e 00:14:51.318 17:18:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.318 17:18:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.318 rmmod nvme_tcp 00:14:51.318 rmmod nvme_fabrics 00:14:51.318 rmmod nvme_keyring 00:14:51.318 17:18:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.318 17:18:21 -- nvmf/common.sh@124 -- # set -e 00:14:51.318 17:18:21 -- nvmf/common.sh@125 -- # return 0 00:14:51.318 17:18:21 -- nvmf/common.sh@478 -- # '[' -n 78025 ']' 00:14:51.318 17:18:21 -- nvmf/common.sh@479 -- # killprocess 78025 00:14:51.318 17:18:21 -- common/autotest_common.sh@936 -- # '[' -z 78025 ']' 00:14:51.319 17:18:21 -- common/autotest_common.sh@940 -- 
# kill -0 78025 00:14:51.319 17:18:21 -- common/autotest_common.sh@941 -- # uname 00:14:51.319 17:18:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.319 17:18:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78025 00:14:51.319 killing process with pid 78025 00:14:51.319 17:18:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.319 17:18:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.319 17:18:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78025' 00:14:51.319 17:18:21 -- common/autotest_common.sh@955 -- # kill 78025 00:14:51.319 17:18:21 -- common/autotest_common.sh@960 -- # wait 78025 00:14:51.577 17:18:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:51.577 17:18:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:51.577 17:18:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:51.577 17:18:21 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.577 17:18:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.577 17:18:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.577 17:18:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.577 17:18:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.577 17:18:21 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:51.577 ************************************ 00:14:51.577 END TEST nvmf_bdev_io_wait 00:14:51.577 ************************************ 00:14:51.577 00:14:51.577 real 0m3.742s 00:14:51.577 user 0m16.744s 00:14:51.577 sys 0m1.639s 00:14:51.577 17:18:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:51.577 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:14:51.578 17:18:21 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.578 17:18:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:51.578 17:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.578 17:18:21 -- common/autotest_common.sh@10 -- # set +x 00:14:51.837 ************************************ 00:14:51.837 START TEST nvmf_queue_depth 00:14:51.837 ************************************ 00:14:51.837 17:18:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:51.837 * Looking for test storage... 
00:14:51.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:51.837 17:18:21 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.837 17:18:21 -- nvmf/common.sh@7 -- # uname -s 00:14:51.837 17:18:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.837 17:18:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.837 17:18:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.837 17:18:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.838 17:18:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.838 17:18:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.838 17:18:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.838 17:18:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.838 17:18:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.838 17:18:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.838 17:18:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:14:51.838 17:18:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:14:51.838 17:18:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.838 17:18:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.838 17:18:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.838 17:18:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.838 17:18:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.838 17:18:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.838 17:18:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.838 17:18:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.838 17:18:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.838 17:18:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.838 17:18:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.838 17:18:21 -- paths/export.sh@5 -- # export PATH 00:14:51.838 17:18:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.838 17:18:21 -- nvmf/common.sh@47 -- # : 0 00:14:51.838 17:18:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.838 17:18:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.838 17:18:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.838 17:18:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.838 17:18:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.838 17:18:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.838 17:18:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.838 17:18:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.838 17:18:21 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:51.838 17:18:21 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:51.838 17:18:21 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.838 17:18:21 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:51.838 17:18:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:51.838 17:18:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.838 17:18:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:51.838 17:18:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:51.838 17:18:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:51.838 17:18:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.838 17:18:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.838 17:18:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.838 17:18:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:51.838 17:18:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:51.838 17:18:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:51.838 17:18:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:51.838 17:18:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:51.838 17:18:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:51.838 17:18:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.838 17:18:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.838 17:18:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:51.838 17:18:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:51.838 17:18:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.838 17:18:21 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.838 17:18:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.838 17:18:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.838 17:18:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.838 17:18:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.838 17:18:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.838 17:18:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.838 17:18:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:51.838 17:18:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:51.838 Cannot find device "nvmf_tgt_br" 00:14:51.838 17:18:21 -- nvmf/common.sh@155 -- # true 00:14:51.838 17:18:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.838 Cannot find device "nvmf_tgt_br2" 00:14:51.838 17:18:21 -- nvmf/common.sh@156 -- # true 00:14:51.838 17:18:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:51.838 17:18:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:51.838 Cannot find device "nvmf_tgt_br" 00:14:51.838 17:18:21 -- nvmf/common.sh@158 -- # true 00:14:51.838 17:18:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:51.838 Cannot find device "nvmf_tgt_br2" 00:14:51.838 17:18:21 -- nvmf/common.sh@159 -- # true 00:14:51.838 17:18:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:51.838 17:18:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:52.097 17:18:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.097 17:18:21 -- nvmf/common.sh@162 -- # true 00:14:52.097 17:18:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.097 17:18:21 -- nvmf/common.sh@163 -- # true 00:14:52.097 17:18:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.097 17:18:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.097 17:18:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.097 17:18:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.097 17:18:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.097 17:18:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.097 17:18:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.097 17:18:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:52.097 17:18:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:52.097 17:18:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:52.097 17:18:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:52.097 17:18:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:52.098 17:18:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:52.098 17:18:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.098 17:18:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:14:52.098 17:18:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.098 17:18:21 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:52.098 17:18:21 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:52.098 17:18:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.098 17:18:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.098 17:18:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.098 17:18:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.098 17:18:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.098 17:18:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:52.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:14:52.098 00:14:52.098 --- 10.0.0.2 ping statistics --- 00:14:52.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.098 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:52.098 17:18:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:52.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:52.098 00:14:52.098 --- 10.0.0.3 ping statistics --- 00:14:52.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.098 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:52.098 17:18:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:52.098 00:14:52.098 --- 10.0.0.1 ping statistics --- 00:14:52.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.098 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:52.357 17:18:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.357 17:18:22 -- nvmf/common.sh@422 -- # return 0 00:14:52.357 17:18:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:52.357 17:18:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.357 17:18:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:52.357 17:18:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:52.357 17:18:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.357 17:18:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:52.357 17:18:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:52.357 17:18:22 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:52.357 17:18:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:52.357 17:18:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:52.357 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.357 17:18:22 -- nvmf/common.sh@470 -- # nvmfpid=78318 00:14:52.357 17:18:22 -- nvmf/common.sh@471 -- # waitforlisten 78318 00:14:52.357 17:18:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:52.357 17:18:22 -- common/autotest_common.sh@817 -- # '[' -z 78318 ']' 00:14:52.357 17:18:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.357 17:18:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:52.357 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:52.357 17:18:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.357 17:18:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:52.357 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.357 [2024-04-25 17:18:22.147311] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:52.357 [2024-04-25 17:18:22.147386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.357 [2024-04-25 17:18:22.283178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.615 [2024-04-25 17:18:22.351561] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.615 [2024-04-25 17:18:22.351625] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.615 [2024-04-25 17:18:22.351640] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.615 [2024-04-25 17:18:22.351651] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.615 [2024-04-25 17:18:22.351660] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.615 [2024-04-25 17:18:22.351719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.616 17:18:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:52.616 17:18:22 -- common/autotest_common.sh@850 -- # return 0 00:14:52.616 17:18:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:52.616 17:18:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:52.616 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.616 17:18:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.616 17:18:22 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.616 17:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.616 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.616 [2024-04-25 17:18:22.491683] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.616 17:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.616 17:18:22 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.616 17:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.616 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.616 Malloc0 00:14:52.616 17:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.616 17:18:22 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.616 17:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.616 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.616 17:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.616 17:18:22 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.616 17:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.616 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.616 17:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.616 17:18:22 -- 
target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.616 17:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.616 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.616 [2024-04-25 17:18:22.551346] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.616 17:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.616 17:18:22 -- target/queue_depth.sh@30 -- # bdevperf_pid=78353 00:14:52.616 17:18:22 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.616 17:18:22 -- target/queue_depth.sh@33 -- # waitforlisten 78353 /var/tmp/bdevperf.sock 00:14:52.616 17:18:22 -- common/autotest_common.sh@817 -- # '[' -z 78353 ']' 00:14:52.616 17:18:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.616 17:18:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:52.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.616 17:18:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.616 17:18:22 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:52.616 17:18:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:52.616 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.875 [2024-04-25 17:18:22.602310] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:14:52.875 [2024-04-25 17:18:22.602389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78353 ] 00:14:52.875 [2024-04-25 17:18:22.736919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.875 [2024-04-25 17:18:22.788638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.134 17:18:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:53.134 17:18:22 -- common/autotest_common.sh@850 -- # return 0 00:14:53.134 17:18:22 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:53.134 17:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.134 17:18:22 -- common/autotest_common.sh@10 -- # set +x 00:14:53.134 NVMe0n1 00:14:53.134 17:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.134 17:18:22 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.134 Running I/O for 10 seconds... 
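Unlike the previous test, queue_depth drives bdevperf through its RPC socket: bdevperf is started idle with -z, the NVMe-oF controller is attached over /var/tmp/bdevperf.sock, and bdevperf.py perform_tests launches the 10-second verify run at queue depth 1024. A condensed sketch of that sequence, with paths and parameters taken from the trace (the socket poll below is a simple stand-in for the test's waitforlisten helper):

# Start bdevperf idle (-z) with its own RPC socket and the workload
# parameters used by this run: 1024 outstanding 4 KiB verify I/Os for 10 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# Wait for the RPC socket to appear (stand-in for waitforlisten).
until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

# Attach the namespace exported by cnode1 on 10.0.0.2:4420; it shows up in
# bdevperf as NVMe0n1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the configured run and collect the results; the test then tears
# bdevperf down (killprocess in the log below).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

kill "$bdevperf_pid"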
00:15:05.341 00:15:05.341 Latency(us) 00:15:05.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.341 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:05.341 Verification LBA range: start 0x0 length 0x4000 00:15:05.341 NVMe0n1 : 10.07 10419.71 40.70 0.00 0.00 97851.25 21686.46 96278.34 00:15:05.341 =================================================================================================================== 00:15:05.341 Total : 10419.71 40.70 0.00 0.00 97851.25 21686.46 96278.34 00:15:05.341 0 00:15:05.341 17:18:33 -- target/queue_depth.sh@39 -- # killprocess 78353 00:15:05.341 17:18:33 -- common/autotest_common.sh@936 -- # '[' -z 78353 ']' 00:15:05.341 17:18:33 -- common/autotest_common.sh@940 -- # kill -0 78353 00:15:05.341 17:18:33 -- common/autotest_common.sh@941 -- # uname 00:15:05.341 17:18:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.341 17:18:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78353 00:15:05.341 17:18:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:05.341 17:18:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:05.341 killing process with pid 78353 00:15:05.341 17:18:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78353' 00:15:05.341 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.341 00:15:05.341 Latency(us) 00:15:05.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.341 =================================================================================================================== 00:15:05.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.341 17:18:33 -- common/autotest_common.sh@955 -- # kill 78353 00:15:05.341 17:18:33 -- common/autotest_common.sh@960 -- # wait 78353 00:15:05.341 17:18:33 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:05.341 17:18:33 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:05.341 17:18:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:05.341 17:18:33 -- nvmf/common.sh@117 -- # sync 00:15:05.341 17:18:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.341 17:18:33 -- nvmf/common.sh@120 -- # set +e 00:15:05.341 17:18:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.341 17:18:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.341 rmmod nvme_tcp 00:15:05.341 rmmod nvme_fabrics 00:15:05.341 rmmod nvme_keyring 00:15:05.341 17:18:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.341 17:18:33 -- nvmf/common.sh@124 -- # set -e 00:15:05.341 17:18:33 -- nvmf/common.sh@125 -- # return 0 00:15:05.341 17:18:33 -- nvmf/common.sh@478 -- # '[' -n 78318 ']' 00:15:05.341 17:18:33 -- nvmf/common.sh@479 -- # killprocess 78318 00:15:05.341 17:18:33 -- common/autotest_common.sh@936 -- # '[' -z 78318 ']' 00:15:05.341 17:18:33 -- common/autotest_common.sh@940 -- # kill -0 78318 00:15:05.341 17:18:33 -- common/autotest_common.sh@941 -- # uname 00:15:05.341 17:18:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.341 17:18:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78318 00:15:05.341 17:18:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:05.341 17:18:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:05.341 killing process with pid 78318 00:15:05.341 17:18:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78318' 00:15:05.341 17:18:33 -- 
common/autotest_common.sh@955 -- # kill 78318 00:15:05.341 17:18:33 -- common/autotest_common.sh@960 -- # wait 78318 00:15:05.341 17:18:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:05.341 17:18:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:05.341 17:18:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:05.341 17:18:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.341 17:18:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.341 17:18:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.341 17:18:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.341 17:18:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.341 17:18:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:05.341 00:15:05.341 real 0m12.092s 00:15:05.341 user 0m20.856s 00:15:05.341 sys 0m1.855s 00:15:05.341 17:18:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:05.341 ************************************ 00:15:05.341 END TEST nvmf_queue_depth 00:15:05.341 ************************************ 00:15:05.342 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:05.342 17:18:33 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.342 17:18:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:05.342 17:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.342 17:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:05.342 ************************************ 00:15:05.342 START TEST nvmf_multipath 00:15:05.342 ************************************ 00:15:05.342 17:18:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:05.342 * Looking for test storage... 
00:15:05.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:05.342 17:18:33 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.342 17:18:33 -- nvmf/common.sh@7 -- # uname -s 00:15:05.342 17:18:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.342 17:18:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.342 17:18:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.342 17:18:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.342 17:18:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.342 17:18:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.342 17:18:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.342 17:18:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.342 17:18:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.342 17:18:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.342 17:18:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:05.342 17:18:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:05.342 17:18:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.342 17:18:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.342 17:18:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.342 17:18:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.342 17:18:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.342 17:18:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.342 17:18:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.342 17:18:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.342 17:18:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.342 17:18:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.342 17:18:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.342 17:18:33 -- paths/export.sh@5 -- # export PATH 00:15:05.342 17:18:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.342 17:18:33 -- nvmf/common.sh@47 -- # : 0 00:15:05.342 17:18:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.342 17:18:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.342 17:18:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.342 17:18:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.342 17:18:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.342 17:18:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.342 17:18:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.342 17:18:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.342 17:18:33 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.342 17:18:33 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:05.342 17:18:33 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:05.342 17:18:33 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.342 17:18:33 -- target/multipath.sh@43 -- # nvmftestinit 00:15:05.342 17:18:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:05.342 17:18:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.342 17:18:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:05.342 17:18:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:05.342 17:18:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:05.342 17:18:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.342 17:18:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.342 17:18:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.342 17:18:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:05.342 17:18:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:05.342 17:18:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:05.342 17:18:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:05.342 17:18:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:05.342 17:18:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:05.342 17:18:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.342 17:18:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.342 17:18:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:05.342 17:18:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:05.342 17:18:33 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.342 17:18:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.342 17:18:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.342 17:18:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.342 17:18:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.342 17:18:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.342 17:18:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.342 17:18:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.342 17:18:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:05.342 17:18:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:05.342 Cannot find device "nvmf_tgt_br" 00:15:05.342 17:18:33 -- nvmf/common.sh@155 -- # true 00:15:05.342 17:18:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.342 Cannot find device "nvmf_tgt_br2" 00:15:05.342 17:18:33 -- nvmf/common.sh@156 -- # true 00:15:05.342 17:18:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:05.342 17:18:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:05.342 Cannot find device "nvmf_tgt_br" 00:15:05.342 17:18:33 -- nvmf/common.sh@158 -- # true 00:15:05.342 17:18:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:05.342 Cannot find device "nvmf_tgt_br2" 00:15:05.342 17:18:33 -- nvmf/common.sh@159 -- # true 00:15:05.342 17:18:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:05.342 17:18:34 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:05.342 17:18:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.342 17:18:34 -- nvmf/common.sh@162 -- # true 00:15:05.342 17:18:34 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.342 17:18:34 -- nvmf/common.sh@163 -- # true 00:15:05.342 17:18:34 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.342 17:18:34 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.342 17:18:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.342 17:18:34 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.342 17:18:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:05.342 17:18:34 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:05.342 17:18:34 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:05.342 17:18:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:05.342 17:18:34 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:05.342 17:18:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:05.342 17:18:34 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:05.342 17:18:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:05.342 17:18:34 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:05.342 17:18:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:05.342 17:18:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:05.342 17:18:34 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:05.342 17:18:34 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:05.342 17:18:34 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:05.342 17:18:34 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:05.342 17:18:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:05.342 17:18:34 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:05.342 17:18:34 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:05.342 17:18:34 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:05.342 17:18:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:05.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:05.342 00:15:05.342 --- 10.0.0.2 ping statistics --- 00:15:05.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.342 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:05.342 17:18:34 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:05.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:05.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:05.342 00:15:05.343 --- 10.0.0.3 ping statistics --- 00:15:05.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.343 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:05.343 17:18:34 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:05.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:05.343 00:15:05.343 --- 10.0.0.1 ping statistics --- 00:15:05.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.343 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:05.343 17:18:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.343 17:18:34 -- nvmf/common.sh@422 -- # return 0 00:15:05.343 17:18:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:05.343 17:18:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.343 17:18:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:05.343 17:18:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:05.343 17:18:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.343 17:18:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:05.343 17:18:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:05.343 17:18:34 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:05.343 17:18:34 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:05.343 17:18:34 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:05.343 17:18:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:05.343 17:18:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:05.343 17:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:05.343 17:18:34 -- nvmf/common.sh@470 -- # nvmfpid=78672 00:15:05.343 17:18:34 -- nvmf/common.sh@471 -- # waitforlisten 78672 00:15:05.343 17:18:34 -- common/autotest_common.sh@817 -- # '[' -z 78672 ']' 00:15:05.343 17:18:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.343 17:18:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.343 17:18:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.343 17:18:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.343 17:18:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.343 17:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:05.343 [2024-04-25 17:18:34.297011] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:05.343 [2024-04-25 17:18:34.297091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.343 [2024-04-25 17:18:34.431459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.343 [2024-04-25 17:18:34.482882] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.343 [2024-04-25 17:18:34.482946] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.343 [2024-04-25 17:18:34.482972] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.343 [2024-04-25 17:18:34.482979] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.343 [2024-04-25 17:18:34.482985] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
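A note on the ANA exercise traced below: the host connects to the same subsystem through both listeners (10.0.0.2 and 10.0.0.3), so the kernel exposes one namespace with two controller paths, nvme0c0n1 and nvme0c1n1. The test flips each listener's ANA state over the target RPC and polls the per-path sysfs state until it matches. A condensed sketch of one such round trip, using the same RPCs that appear in the trace (the cat calls are illustrative; the script compares the contents of the same ana_state files):

  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
  cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized

fio keeps randrw traffic running across each state change, which is the multipath behaviour under test.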
00:15:05.343 [2024-04-25 17:18:34.483678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.343 [2024-04-25 17:18:34.483829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.343 [2024-04-25 17:18:34.483960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.343 [2024-04-25 17:18:34.483963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.343 17:18:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.343 17:18:34 -- common/autotest_common.sh@850 -- # return 0 00:15:05.343 17:18:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:05.343 17:18:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:05.343 17:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:05.343 17:18:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.343 17:18:34 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:05.343 [2024-04-25 17:18:34.859298] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.343 17:18:34 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:05.343 Malloc0 00:15:05.343 17:18:35 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:05.602 17:18:35 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.861 17:18:35 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.861 [2024-04-25 17:18:35.830836] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.119 17:18:35 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:06.119 [2024-04-25 17:18:36.091070] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:06.378 17:18:36 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:06.378 17:18:36 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:06.637 17:18:36 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.637 17:18:36 -- common/autotest_common.sh@1184 -- # local i=0 00:15:06.637 17:18:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.637 17:18:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:06.637 17:18:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:09.171 17:18:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:09.171 17:18:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:09.171 17:18:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.171 17:18:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:09.171 17:18:38 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.171 17:18:38 -- common/autotest_common.sh@1194 -- # return 0 00:15:09.171 17:18:38 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:09.171 17:18:38 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:09.171 17:18:38 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:09.171 17:18:38 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:09.171 17:18:38 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:09.171 17:18:38 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:09.171 17:18:38 -- target/multipath.sh@38 -- # return 0 00:15:09.171 17:18:38 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:09.171 17:18:38 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:09.171 17:18:38 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:09.171 17:18:38 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:09.171 17:18:38 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:09.171 17:18:38 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:09.171 17:18:38 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:09.171 17:18:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:09.171 17:18:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:09.171 17:18:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:09.171 17:18:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:09.171 17:18:38 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:09.171 17:18:38 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:09.171 17:18:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:09.171 17:18:38 -- target/multipath.sh@22 -- # local timeout=20 00:15:09.171 17:18:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:09.171 17:18:38 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:09.171 17:18:38 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:09.171 17:18:38 -- target/multipath.sh@85 -- # echo numa 00:15:09.171 17:18:38 -- target/multipath.sh@88 -- # fio_pid=78791 00:15:09.171 17:18:38 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:09.171 17:18:38 -- target/multipath.sh@90 -- # sleep 1 00:15:09.171 [global] 00:15:09.171 thread=1 00:15:09.171 invalidate=1 00:15:09.171 rw=randrw 00:15:09.171 time_based=1 00:15:09.171 runtime=6 00:15:09.171 ioengine=libaio 00:15:09.171 direct=1 00:15:09.171 bs=4096 00:15:09.171 iodepth=128 00:15:09.171 norandommap=0 00:15:09.171 numjobs=1 00:15:09.171 00:15:09.171 verify_dump=1 00:15:09.171 verify_backlog=512 00:15:09.171 verify_state_save=0 00:15:09.171 do_verify=1 00:15:09.171 verify=crc32c-intel 00:15:09.171 [job0] 00:15:09.171 filename=/dev/nvme0n1 00:15:09.171 Could not set queue depth (nvme0n1) 00:15:09.171 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.171 fio-3.35 00:15:09.171 Starting 1 thread 00:15:09.740 17:18:39 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:09.998 17:18:39 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:10.266 17:18:40 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:10.266 17:18:40 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:10.266 17:18:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.266 17:18:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:10.266 17:18:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:10.266 17:18:40 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:10.266 17:18:40 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:10.266 17:18:40 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:10.266 17:18:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:10.266 17:18:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:10.266 17:18:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:10.266 17:18:40 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:10.266 17:18:40 -- target/multipath.sh@25 -- # sleep 1s 00:15:11.204 17:18:41 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:11.204 17:18:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.204 17:18:41 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:11.204 17:18:41 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:11.463 17:18:41 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:11.722 17:18:41 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:11.722 17:18:41 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:11.722 17:18:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.722 17:18:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:11.722 17:18:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:11.722 17:18:41 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:11.722 17:18:41 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:11.722 17:18:41 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:11.722 17:18:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:11.722 17:18:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:11.722 17:18:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:11.722 17:18:41 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:11.722 17:18:41 -- target/multipath.sh@25 -- # sleep 1s 00:15:12.659 17:18:42 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:12.659 17:18:42 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:12.659 17:18:42 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:12.659 17:18:42 -- target/multipath.sh@104 -- # wait 78791 00:15:15.195 00:15:15.195 job0: (groupid=0, jobs=1): err= 0: pid=78816: Thu Apr 25 17:18:44 2024 00:15:15.195 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(259MiB/6003msec) 00:15:15.195 slat (usec): min=4, max=5729, avg=51.90, stdev=233.63 00:15:15.195 clat (usec): min=896, max=41032, avg=7914.12, stdev=1507.45 00:15:15.195 lat (usec): min=922, max=41042, avg=7966.02, stdev=1514.74 00:15:15.195 clat percentiles (usec): 00:15:15.195 | 1.00th=[ 4752], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7111], 00:15:15.195 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:15:15.195 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[ 9896], 00:15:15.195 | 99.00th=[11731], 99.50th=[12256], 99.90th=[14091], 99.95th=[39584], 00:15:15.195 | 99.99th=[41157] 00:15:15.195 bw ( KiB/s): min= 9928, max=28848, per=51.48%, avg=22789.27, stdev=6266.58, samples=11 00:15:15.195 iops : min= 2482, max= 7212, avg=5697.27, stdev=1566.61, samples=11 00:15:15.195 write: IOPS=6437, BW=25.1MiB/s (26.4MB/s)(134MiB/5329msec); 0 zone resets 00:15:15.195 slat (usec): min=15, max=32378, avg=64.85, stdev=238.42 00:15:15.195 clat (usec): min=2478, max=39804, avg=6874.12, stdev=1708.42 00:15:15.195 lat (usec): min=2503, max=39845, avg=6938.97, stdev=1718.97 00:15:15.195 clat percentiles (usec): 00:15:15.195 | 1.00th=[ 3949], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6259], 00:15:15.195 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7046], 00:15:15.195 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8094], 00:15:15.195 | 99.00th=[ 9634], 99.50th=[11076], 99.90th=[39060], 99.95th=[39584], 00:15:15.195 | 99.99th=[39584] 00:15:15.195 bw ( KiB/s): min=10024, max=28360, per=88.64%, avg=22824.27, stdev=6037.67, samples=11 00:15:15.195 iops : min= 2506, max= 7090, avg=5706.00, stdev=1509.37, samples=11 00:15:15.195 lat (usec) : 1000=0.01% 00:15:15.195 lat (msec) : 2=0.01%, 4=0.53%, 10=96.26%, 20=3.08%, 50=0.13% 00:15:15.195 cpu : usr=5.36%, sys=22.52%, ctx=6471, majf=0, minf=108 00:15:15.195 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.195 issued rwts: total=66431,34304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.195 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.195 00:15:15.195 Run status group 0 (all jobs): 00:15:15.195 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=259MiB (272MB), run=6003-6003msec 00:15:15.195 WRITE: bw=25.1MiB/s (26.4MB/s), 25.1MiB/s-25.1MiB/s (26.4MB/s-26.4MB/s), io=134MiB (141MB), run=5329-5329msec 00:15:15.195 00:15:15.195 Disk stats (read/write): 00:15:15.195 nvme0n1: ios=65579/33631, merge=0/0, ticks=484741/214314, in_queue=699055, util=98.62% 00:15:15.195 17:18:44 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:15.195 17:18:45 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:15.454 17:18:45 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:15:15.454 17:18:45 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:15.454 17:18:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:15.454 17:18:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:15.454 17:18:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:15.454 17:18:45 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:15.454 17:18:45 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:15.454 17:18:45 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:15.454 17:18:45 -- target/multipath.sh@22 -- # local timeout=20 00:15:15.454 17:18:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:15.454 17:18:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:15.454 17:18:45 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:15.454 17:18:45 -- target/multipath.sh@25 -- # sleep 1s 00:15:16.393 17:18:46 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:16.393 17:18:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:16.393 17:18:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:16.393 17:18:46 -- target/multipath.sh@113 -- # echo round-robin 00:15:16.393 17:18:46 -- target/multipath.sh@116 -- # fio_pid=78942 00:15:16.393 17:18:46 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:16.393 17:18:46 -- target/multipath.sh@118 -- # sleep 1 00:15:16.393 [global] 00:15:16.393 thread=1 00:15:16.393 invalidate=1 00:15:16.393 rw=randrw 00:15:16.393 time_based=1 00:15:16.393 runtime=6 00:15:16.393 ioengine=libaio 00:15:16.393 direct=1 00:15:16.393 bs=4096 00:15:16.393 iodepth=128 00:15:16.393 norandommap=0 00:15:16.393 numjobs=1 00:15:16.393 00:15:16.393 verify_dump=1 00:15:16.393 verify_backlog=512 00:15:16.393 verify_state_save=0 00:15:16.393 do_verify=1 00:15:16.393 verify=crc32c-intel 00:15:16.393 [job0] 00:15:16.393 filename=/dev/nvme0n1 00:15:16.651 Could not set queue depth (nvme0n1) 00:15:16.651 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:16.651 fio-3.35 00:15:16.651 Starting 1 thread 00:15:17.587 17:18:47 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:17.846 17:18:47 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:18.105 17:18:47 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:18.105 17:18:47 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:18.105 17:18:47 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.105 17:18:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:18.105 17:18:47 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:18.105 17:18:47 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:18.105 17:18:47 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:18.105 17:18:47 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:18.105 17:18:47 -- target/multipath.sh@22 -- # local timeout=20 00:15:18.105 17:18:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:18.105 17:18:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.105 17:18:47 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:18.105 17:18:47 -- target/multipath.sh@25 -- # sleep 1s 00:15:19.041 17:18:48 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:19.041 17:18:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.041 17:18:48 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.041 17:18:48 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:19.299 17:18:49 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:19.558 17:18:49 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:19.558 17:18:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:19.558 17:18:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.558 17:18:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:19.558 17:18:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:19.558 17:18:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.558 17:18:49 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:19.558 17:18:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:19.558 17:18:49 -- target/multipath.sh@22 -- # local timeout=20 00:15:19.558 17:18:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:19.558 17:18:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.558 17:18:49 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:19.558 17:18:49 -- target/multipath.sh@25 -- # sleep 1s 00:15:20.493 17:18:50 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:20.493 17:18:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.493 17:18:50 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:20.493 17:18:50 -- target/multipath.sh@132 -- # wait 78942 00:15:23.022 00:15:23.022 job0: (groupid=0, jobs=1): err= 0: pid=78963: Thu Apr 25 17:18:52 2024 00:15:23.022 read: IOPS=12.6k, BW=49.3MiB/s (51.6MB/s)(296MiB/6004msec) 00:15:23.022 slat (usec): min=5, max=5255, avg=41.38, stdev=200.25 00:15:23.022 clat (usec): min=315, max=13977, avg=7099.08, stdev=1504.10 00:15:23.022 lat (usec): min=339, max=13987, avg=7140.46, stdev=1520.72 00:15:23.022 clat percentiles (usec): 00:15:23.022 | 1.00th=[ 2999], 5.00th=[ 4293], 10.00th=[ 5014], 20.00th=[ 6063], 00:15:23.022 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7504], 00:15:23.022 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8717], 95.00th=[ 9241], 00:15:23.022 | 99.00th=[10945], 99.50th=[11469], 99.90th=[12387], 99.95th=[12780], 00:15:23.022 | 99.99th=[13698] 00:15:23.022 bw ( KiB/s): min=13216, max=39784, per=53.29%, avg=26878.55, stdev=8221.21, samples=11 00:15:23.022 iops : min= 3304, max= 9946, avg=6719.64, stdev=2055.30, samples=11 00:15:23.022 write: IOPS=7474, BW=29.2MiB/s (30.6MB/s)(149MiB/5102msec); 0 zone resets 00:15:23.022 slat (usec): min=15, max=2122, avg=52.43, stdev=130.97 00:15:23.022 clat (usec): min=664, max=12885, avg=5887.76, stdev=1434.87 00:15:23.022 lat (usec): min=692, max=12909, avg=5940.19, stdev=1446.93 00:15:23.022 clat percentiles (usec): 00:15:23.022 | 1.00th=[ 2606], 5.00th=[ 3294], 10.00th=[ 3720], 20.00th=[ 4424], 00:15:23.022 | 30.00th=[ 5276], 40.00th=[ 5932], 50.00th=[ 6259], 60.00th=[ 6521], 00:15:23.022 | 70.00th=[ 6783], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7635], 00:15:23.022 | 99.00th=[ 8717], 99.50th=[ 9634], 99.90th=[11207], 99.95th=[11600], 00:15:23.022 | 99.99th=[12256] 00:15:23.022 bw ( KiB/s): min=13864, max=40448, per=89.85%, avg=26862.55, stdev=8013.95, samples=11 00:15:23.022 iops : min= 3466, max=10112, avg=6715.64, stdev=2003.49, samples=11 00:15:23.023 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:15:23.023 lat (msec) : 2=0.23%, 4=6.63%, 10=91.30%, 20=1.80% 00:15:23.023 cpu : usr=6.20%, sys=25.25%, ctx=7624, majf=0, minf=121 00:15:23.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:23.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.023 issued rwts: total=75709,38135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.023 00:15:23.023 Run status group 0 (all jobs): 00:15:23.023 READ: bw=49.3MiB/s (51.6MB/s), 49.3MiB/s-49.3MiB/s (51.6MB/s-51.6MB/s), io=296MiB (310MB), run=6004-6004msec 00:15:23.023 WRITE: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=149MiB (156MB), run=5102-5102msec 00:15:23.023 00:15:23.023 Disk stats (read/write): 00:15:23.023 nvme0n1: ios=74115/38135, merge=0/0, ticks=489145/207151, in_queue=696296, util=98.62% 00:15:23.023 17:18:52 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:23.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:23.023 17:18:52 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:23.023 17:18:52 -- common/autotest_common.sh@1205 -- # local i=0 00:15:23.023 17:18:52 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:23.023 17:18:52 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.023 17:18:52 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:23.023 17:18:52 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:23.023 17:18:52 -- common/autotest_common.sh@1217 -- # return 0 00:15:23.023 17:18:52 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.023 17:18:52 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:23.023 17:18:52 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:23.023 17:18:52 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:23.023 17:18:52 -- target/multipath.sh@144 -- # nvmftestfini 00:15:23.023 17:18:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:23.023 17:18:52 -- nvmf/common.sh@117 -- # sync 00:15:23.281 17:18:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:23.281 17:18:53 -- nvmf/common.sh@120 -- # set +e 00:15:23.281 17:18:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:23.281 17:18:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:23.281 rmmod nvme_tcp 00:15:23.281 rmmod nvme_fabrics 00:15:23.281 rmmod nvme_keyring 00:15:23.281 17:18:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:23.281 17:18:53 -- nvmf/common.sh@124 -- # set -e 00:15:23.281 17:18:53 -- nvmf/common.sh@125 -- # return 0 00:15:23.281 17:18:53 -- nvmf/common.sh@478 -- # '[' -n 78672 ']' 00:15:23.281 17:18:53 -- nvmf/common.sh@479 -- # killprocess 78672 00:15:23.281 17:18:53 -- common/autotest_common.sh@936 -- # '[' -z 78672 ']' 00:15:23.281 17:18:53 -- common/autotest_common.sh@940 -- # kill -0 78672 00:15:23.281 17:18:53 -- common/autotest_common.sh@941 -- # uname 00:15:23.281 17:18:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.281 17:18:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78672 00:15:23.281 killing process with pid 78672 00:15:23.281 17:18:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:23.282 17:18:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:23.282 17:18:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78672' 00:15:23.282 17:18:53 -- common/autotest_common.sh@955 -- # kill 78672 00:15:23.282 17:18:53 -- common/autotest_common.sh@960 -- # wait 78672 00:15:23.543 17:18:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:23.543 17:18:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:23.543 17:18:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:23.543 17:18:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.543 17:18:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:23.543 17:18:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.543 17:18:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.543 17:18:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.544 17:18:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:23.544 00:15:23.544 real 0m19.553s 00:15:23.544 user 1m15.830s 00:15:23.544 sys 0m7.036s 00:15:23.544 17:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:23.544 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.544 ************************************ 00:15:23.544 END TEST nvmf_multipath 00:15:23.544 ************************************ 00:15:23.544 17:18:53 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:23.544 17:18:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:23.544 17:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.544 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.544 ************************************ 00:15:23.544 START TEST nvmf_zcopy 00:15:23.544 ************************************ 00:15:23.544 17:18:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:23.544 * Looking for test storage... 00:15:23.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:23.802 17:18:53 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.802 17:18:53 -- nvmf/common.sh@7 -- # uname -s 00:15:23.802 17:18:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.802 17:18:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.802 17:18:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.802 17:18:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.802 17:18:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.802 17:18:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.802 17:18:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.802 17:18:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.802 17:18:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.802 17:18:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.802 17:18:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:23.802 17:18:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:23.802 17:18:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.802 17:18:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.802 17:18:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.802 17:18:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.802 17:18:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.802 17:18:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.802 17:18:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.802 17:18:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.802 17:18:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.802 17:18:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.802 17:18:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.802 17:18:53 -- paths/export.sh@5 -- # export PATH 00:15:23.803 17:18:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.803 17:18:53 -- nvmf/common.sh@47 -- # : 0 00:15:23.803 17:18:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.803 17:18:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.803 17:18:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.803 17:18:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.803 17:18:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.803 17:18:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.803 17:18:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.803 17:18:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.803 17:18:53 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:23.803 17:18:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:23.803 17:18:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.803 17:18:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:23.803 17:18:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:23.803 17:18:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:23.803 17:18:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.803 17:18:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.803 17:18:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.803 17:18:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:23.803 17:18:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:23.803 17:18:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:23.803 17:18:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:23.803 17:18:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:23.803 17:18:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:23.803 17:18:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.803 17:18:53 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.803 17:18:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:23.803 17:18:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:23.803 17:18:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.803 17:18:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.803 17:18:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.803 17:18:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.803 17:18:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.803 17:18:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.803 17:18:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.803 17:18:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.803 17:18:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:23.803 17:18:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:23.803 Cannot find device "nvmf_tgt_br" 00:15:23.803 17:18:53 -- nvmf/common.sh@155 -- # true 00:15:23.803 17:18:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.803 Cannot find device "nvmf_tgt_br2" 00:15:23.803 17:18:53 -- nvmf/common.sh@156 -- # true 00:15:23.803 17:18:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:23.803 17:18:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:23.803 Cannot find device "nvmf_tgt_br" 00:15:23.803 17:18:53 -- nvmf/common.sh@158 -- # true 00:15:23.803 17:18:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:23.803 Cannot find device "nvmf_tgt_br2" 00:15:23.803 17:18:53 -- nvmf/common.sh@159 -- # true 00:15:23.803 17:18:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:23.803 17:18:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:23.803 17:18:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:23.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.803 17:18:53 -- nvmf/common.sh@162 -- # true 00:15:23.803 17:18:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:23.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:23.803 17:18:53 -- nvmf/common.sh@163 -- # true 00:15:23.803 17:18:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:23.803 17:18:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:23.803 17:18:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:23.803 17:18:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:23.803 17:18:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:23.803 17:18:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:23.803 17:18:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:23.803 17:18:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:23.803 17:18:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:23.803 17:18:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:23.803 17:18:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:23.803 17:18:53 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:23.803 17:18:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:24.089 17:18:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:24.089 17:18:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:24.089 17:18:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.089 17:18:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:24.089 17:18:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:24.089 17:18:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.089 17:18:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.089 17:18:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.089 17:18:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.089 17:18:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.089 17:18:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:24.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:15:24.090 00:15:24.090 --- 10.0.0.2 ping statistics --- 00:15:24.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.090 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:24.090 17:18:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:24.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:24.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:24.090 00:15:24.090 --- 10.0.0.3 ping statistics --- 00:15:24.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.090 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:24.090 17:18:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:24.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:24.090 00:15:24.090 --- 10.0.0.1 ping statistics --- 00:15:24.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.090 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:24.090 17:18:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.090 17:18:53 -- nvmf/common.sh@422 -- # return 0 00:15:24.090 17:18:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:24.090 17:18:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.090 17:18:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:24.090 17:18:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:24.090 17:18:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.090 17:18:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:24.090 17:18:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:24.090 17:18:53 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:24.090 17:18:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:24.090 17:18:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:24.090 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:24.090 17:18:53 -- nvmf/common.sh@470 -- # nvmfpid=79250 00:15:24.090 17:18:53 -- nvmf/common.sh@471 -- # waitforlisten 79250 00:15:24.090 17:18:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:24.090 17:18:53 -- common/autotest_common.sh@817 -- # '[' -z 79250 ']' 00:15:24.090 17:18:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.090 17:18:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:24.090 17:18:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.090 17:18:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:24.090 17:18:53 -- common/autotest_common.sh@10 -- # set +x 00:15:24.090 [2024-04-25 17:18:53.943163] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:24.090 [2024-04-25 17:18:53.943238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.361 [2024-04-25 17:18:54.076047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.361 [2024-04-25 17:18:54.178403] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.361 [2024-04-25 17:18:54.178488] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.361 [2024-04-25 17:18:54.178517] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.361 [2024-04-25 17:18:54.178534] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.361 [2024-04-25 17:18:54.178549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.361 [2024-04-25 17:18:54.178607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.361 17:18:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:24.361 17:18:54 -- common/autotest_common.sh@850 -- # return 0 00:15:24.361 17:18:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:24.361 17:18:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:24.361 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.361 17:18:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.361 17:18:54 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:24.361 17:18:54 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:24.361 17:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.361 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.361 [2024-04-25 17:18:54.322278] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.361 17:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.361 17:18:54 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:24.361 17:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.361 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.361 17:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.361 17:18:54 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.361 17:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.361 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.361 [2024-04-25 17:18:54.338354] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.621 17:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.621 17:18:54 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.621 17:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.621 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.621 17:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.621 17:18:54 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:24.621 17:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.621 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.621 malloc0 00:15:24.621 17:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.621 17:18:54 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:24.621 17:18:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.621 17:18:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.621 17:18:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.621 17:18:54 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:24.621 17:18:54 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:24.621 17:18:54 -- nvmf/common.sh@521 -- # config=() 00:15:24.621 17:18:54 -- nvmf/common.sh@521 -- # local subsystem config 00:15:24.621 17:18:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:24.621 17:18:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:24.621 { 00:15:24.621 "params": { 00:15:24.621 "name": "Nvme$subsystem", 00:15:24.621 "trtype": "$TEST_TRANSPORT", 
00:15:24.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:24.621 "adrfam": "ipv4", 00:15:24.621 "trsvcid": "$NVMF_PORT", 00:15:24.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:24.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:24.621 "hdgst": ${hdgst:-false}, 00:15:24.621 "ddgst": ${ddgst:-false} 00:15:24.621 }, 00:15:24.621 "method": "bdev_nvme_attach_controller" 00:15:24.621 } 00:15:24.621 EOF 00:15:24.621 )") 00:15:24.621 17:18:54 -- nvmf/common.sh@543 -- # cat 00:15:24.621 17:18:54 -- nvmf/common.sh@545 -- # jq . 00:15:24.621 17:18:54 -- nvmf/common.sh@546 -- # IFS=, 00:15:24.621 17:18:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:24.621 "params": { 00:15:24.621 "name": "Nvme1", 00:15:24.621 "trtype": "tcp", 00:15:24.621 "traddr": "10.0.0.2", 00:15:24.621 "adrfam": "ipv4", 00:15:24.621 "trsvcid": "4420", 00:15:24.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.621 "hdgst": false, 00:15:24.621 "ddgst": false 00:15:24.621 }, 00:15:24.621 "method": "bdev_nvme_attach_controller" 00:15:24.621 }' 00:15:24.621 [2024-04-25 17:18:54.433252] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:24.621 [2024-04-25 17:18:54.433337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79283 ] 00:15:24.621 [2024-04-25 17:18:54.569915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.879 [2024-04-25 17:18:54.619066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.879 Running I/O for 10 seconds... 00:15:34.859 00:15:34.859 Latency(us) 00:15:34.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.859 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:34.859 Verification LBA range: start 0x0 length 0x1000 00:15:34.859 Nvme1n1 : 10.01 7133.46 55.73 0.00 0.00 17890.04 350.02 24427.05 00:15:34.859 =================================================================================================================== 00:15:34.859 Total : 7133.46 55.73 0.00 0.00 17890.04 350.02 24427.05 00:15:35.118 17:19:04 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:35.118 17:19:04 -- target/zcopy.sh@39 -- # perfpid=79400 00:15:35.118 17:19:04 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:35.118 17:19:04 -- common/autotest_common.sh@10 -- # set +x 00:15:35.118 17:19:04 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:35.118 17:19:04 -- nvmf/common.sh@521 -- # config=() 00:15:35.118 17:19:04 -- nvmf/common.sh@521 -- # local subsystem config 00:15:35.118 17:19:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:35.118 17:19:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:35.118 { 00:15:35.118 "params": { 00:15:35.118 "name": "Nvme$subsystem", 00:15:35.118 "trtype": "$TEST_TRANSPORT", 00:15:35.118 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:35.118 "adrfam": "ipv4", 00:15:35.118 "trsvcid": "$NVMF_PORT", 00:15:35.118 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:35.118 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:35.118 "hdgst": ${hdgst:-false}, 00:15:35.118 "ddgst": ${ddgst:-false} 00:15:35.118 }, 00:15:35.118 "method": "bdev_nvme_attach_controller" 00:15:35.118 } 00:15:35.118 EOF 00:15:35.118 
)") 00:15:35.118 17:19:04 -- nvmf/common.sh@543 -- # cat 00:15:35.118 [2024-04-25 17:19:04.938113] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.118 [2024-04-25 17:19:04.938179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.118 17:19:04 -- nvmf/common.sh@545 -- # jq . 00:15:35.118 2024/04/25 17:19:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.118 17:19:04 -- nvmf/common.sh@546 -- # IFS=, 00:15:35.118 17:19:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:35.118 "params": { 00:15:35.118 "name": "Nvme1", 00:15:35.118 "trtype": "tcp", 00:15:35.118 "traddr": "10.0.0.2", 00:15:35.118 "adrfam": "ipv4", 00:15:35.118 "trsvcid": "4420", 00:15:35.118 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.118 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:35.118 "hdgst": false, 00:15:35.118 "ddgst": false 00:15:35.118 }, 00:15:35.118 "method": "bdev_nvme_attach_controller" 00:15:35.118 }' 00:15:35.118 [2024-04-25 17:19:04.950095] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.118 [2024-04-25 17:19:04.950136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.118 2024/04/25 17:19:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.118 [2024-04-25 17:19:04.962066] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.118 [2024-04-25 17:19:04.962091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.118 2024/04/25 17:19:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.118 [2024-04-25 17:19:04.974084] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.118 [2024-04-25 17:19:04.974108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.118 2024/04/25 17:19:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:04.986072] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:04.986095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:04.998120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:04.998148] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 [2024-04-25 17:19:05.000413] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:35.119 [2024-04-25 17:19:05.000498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79400 ] 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.010109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.010132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.022095] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.022117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.034098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.034121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.046067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.046091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.058103] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.058126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.070076] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.070098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.082076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.082098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.119 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.119 [2024-04-25 17:19:05.094101] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.119 [2024-04-25 17:19:05.094124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.106099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.106121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.118087] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.118109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.130122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.130145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.140670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.379 [2024-04-25 17:19:05.142149] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.142175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.154133] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.154167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.166137] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.166160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.178157] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.178187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.190142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.190166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 [2024-04-25 17:19:05.193462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.202153] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.202176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.214207] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.214243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.226170] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.226206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.238172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.238206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.250158] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.250186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.262172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.262203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.274168] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.274194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.379 [2024-04-25 17:19:05.286160] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.379 [2024-04-25 17:19:05.286188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.379 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.380 [2024-04-25 17:19:05.298179] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.380 [2024-04-25 17:19:05.298206] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.380 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.380 [2024-04-25 17:19:05.310175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.380 [2024-04-25 17:19:05.310201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.380 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.380 [2024-04-25 17:19:05.322195] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.380 [2024-04-25 17:19:05.322225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.380 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.380 Running I/O for 5 seconds... 00:15:35.380 [2024-04-25 17:19:05.334213] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.380 [2024-04-25 17:19:05.334240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.380 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.380 [2024-04-25 17:19:05.350332] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.380 [2024-04-25 17:19:05.350376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.380 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.366599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.366645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.383338] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.383385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.399564] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.399596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.416358] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.416417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.432130] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.432178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.446797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.446844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.462434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.462489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.472323] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.639 [2024-04-25 17:19:05.472370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.639 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.639 [2024-04-25 17:19:05.486329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.486374] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.503016] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.503062] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.519363] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.519411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.537011] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.537058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.551329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.551375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.561273] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.561318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.574893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.574939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.590260] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.590305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.640 [2024-04-25 17:19:05.601480] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.640 [2024-04-25 17:19:05.601525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.640 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.899 [2024-04-25 17:19:05.618825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.899 [2024-04-25 17:19:05.618873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.899 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.899 [2024-04-25 17:19:05.634387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.899 [2024-04-25 17:19:05.634432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.899 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.899 [2024-04-25 17:19:05.649244] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.899 [2024-04-25 17:19:05.649289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.899 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.899 [2024-04-25 17:19:05.664362] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.899 [2024-04-25 17:19:05.664409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.899 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.899 [2024-04-25 17:19:05.673722] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.899 [2024-04-25 17:19:05.673761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:35.899 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.899 [2024-04-25 17:19:05.689037] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.899 [2024-04-25 17:19:05.689082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.700759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.700813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.715747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.715791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.727259] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.727289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.743675] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.743742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.760336] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.760384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:35.900 [2024-04-25 17:19:05.777557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.777602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.793099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.793143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.804330] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.804362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.821303] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.821348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.836020] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.836065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.851943] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.851987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:35.900 [2024-04-25 17:19:05.869391] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:35.900 [2024-04-25 17:19:05.869436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.900 2024/04/25 17:19:05 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.159 [2024-04-25 17:19:05.883798] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.159 [2024-04-25 17:19:05.883843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.159 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.159 [2024-04-25 17:19:05.898878] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.159 [2024-04-25 17:19:05.898923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.159 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.159 [2024-04-25 17:19:05.914773] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.159 [2024-04-25 17:19:05.914818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.159 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.159 [2024-04-25 17:19:05.930911] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:05.930956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:05.948136] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:05.948181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:05.963680] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:05.963755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:05.974639] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:05.974684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:05.983034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:05.983064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:05.998303] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:05.998349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.013967] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.014012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.030578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.030622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.048000] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.048044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.064619] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.064664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.076737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.076808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.092725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.092781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.102197] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.102241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.116185] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.116231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.160 [2024-04-25 17:19:06.132761] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.160 [2024-04-25 17:19:06.132834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.160 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.419 [2024-04-25 17:19:06.147533] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.419 [2024-04-25 17:19:06.147577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.419 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.419 [2024-04-25 17:19:06.163688] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:36.419 [2024-04-25 17:19:06.163742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.419 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.419 [2024-04-25 17:19:06.180560] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.180607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.197807] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.197852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.213845] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.213890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.231425] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.231469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.245423] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.245467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.261435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.261480] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.278987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.279033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.295553] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.295599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.312374] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.312422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.329191] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.329236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.345189] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.345365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.362468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.420 [2024-04-25 17:19:06.362639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.420 [2024-04-25 17:19:06.379152] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:36.420 [2024-04-25 17:19:06.379324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.420 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.397583] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.397616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.411479] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.411508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.426100] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.426129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.442515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.442545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.460457] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.460490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.475505] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.475534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.490373] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.490404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.507124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.507154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.524350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.524383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.541808] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.541840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.557408] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.557439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.574803] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.574834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.590485] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.590519] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.602075] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.602269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.619867] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.620041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.680 [2024-04-25 17:19:06.634018] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.680 [2024-04-25 17:19:06.634185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.680 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.681 [2024-04-25 17:19:06.650773] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.681 [2024-04-25 17:19:06.650822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.681 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.667166] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.667197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.683666] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.683698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.701026] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.701058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.716251] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.716320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.732928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.732956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.747684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.747739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.763584] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.763614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.780488] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.780520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.796073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.796122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.811936] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.811966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.829478] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.829507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.845547] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.845577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.857674] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.857730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.873429] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.873459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:36.941 [2024-04-25 17:19:06.890156] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.890185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:36.941 [2024-04-25 17:19:06.907643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:36.941 [2024-04-25 17:19:06.907673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.941 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:06.924027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:06.924058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:06.941788] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:06.941818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:06.955995] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:06.956026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:06.971797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:06.971827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:06.989686] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:06.989759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.003376] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.003407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.020111] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.020140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.035536] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.035567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.050549] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.050580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.061707] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.061762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.078331] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.078361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.087843] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.087873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.101815] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.101843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.117235] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.117266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.134793] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.134823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.201 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.201 [2024-04-25 17:19:07.150052] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.201 [2024-04-25 17:19:07.150083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.202 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.202 [2024-04-25 17:19:07.165812] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.202 [2024-04-25 17:19:07.165840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.202 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.462 [2024-04-25 17:19:07.178179] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.462 [2024-04-25 17:19:07.178211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.462 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.462 [2024-04-25 17:19:07.189417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.462 [2024-04-25 17:19:07.189446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.462 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.198195] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.198225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.212390] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.212424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.227567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.227596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.239394] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.239423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.256683] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.256739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.271736] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.271764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.283372] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.283403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.299996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.300025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.317248] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.317279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.333437] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.333467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.350784] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.350813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.367906] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.367935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.384295] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.384342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.400996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.401024] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.417149] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.417178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.463 [2024-04-25 17:19:07.433825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.463 [2024-04-25 17:19:07.433854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.463 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.450039] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.450068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.461216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.461244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.478119] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.478148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.492294] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
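
Every record in the run above is the same exchange: the JSON-RPC client asks the target to attach bdev malloc0 to nqn.2016-06.io.spdk:cnode1 as namespace 1, and the target refuses because NSID 1 is already allocated, so each call comes back with Code=-32602 Msg=Invalid parameters. The %!s(bool=false) inside the dumped params is only Go's fmt package formatting a bool with the %s verb; the wire payload carries a plain false. Below is a minimal sketch of an equivalent request, not the test's actual client: the socket path /var/tmp/spdk.sock and the pre-existing malloc0 bdev are assumptions rather than facts from this log, while the method name and params mirror the map printed above.

// duplicate_nsid.go: send one nvmf_subsystem_add_ns request shaped like the
// ones logged above and print the JSON-RPC error the target returns.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

// rpcRequest/rpcResponse model just enough of JSON-RPC 2.0 for this one call.
type rpcRequest struct {
	Version string      `json:"jsonrpc"`
	ID      int         `json:"id"`
	Method  string      `json:"method"`
	Params  interface{} `json:"params"`
}

type rpcError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

type rpcResponse struct {
	Result json.RawMessage `json:"result"`
	Error  *rpcError       `json:"error"`
}

func main() {
	// Assumed default SPDK RPC socket; adjust if the target was started with another -r path.
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Parameters mirror the params map shown in the log records above.
	req := rpcRequest{
		Version: "2.0",
		ID:      1,
		Method:  "nvmf_subsystem_add_ns",
		Params: map[string]interface{}{
			"nqn": "nqn.2016-06.io.spdk:cnode1",
			"namespace": map[string]interface{}{
				"bdev_name":       "malloc0",
				"nsid":            1, // already in use on cnode1, so the target rejects it
				"no_auto_visible": false,
			},
		},
	}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		log.Fatal(err)
	}

	var resp rpcResponse
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		log.Fatal(err)
	}
	if resp.Error != nil {
		// Expected outcome for a duplicate NSID: Code=-32602 Msg=Invalid parameters
		fmt.Printf("Code=%d Msg=%s\n", resp.Error.Code, resp.Error.Message)
		return
	}
	fmt.Printf("result: %s\n", resp.Result)
}

Dropping nsid from the namespace object, or choosing an unused value, would sidestep the collision if the usual SPDK behavior of auto-assigning the lowest free NSID applies; the steady stream of identical -32602 responses here suggests the test is deliberately reusing NSID 1 and exercising the error path rather than hitting a target fault.
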
00:15:37.723 [2024-04-25 17:19:07.492341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.507772] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.507801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.525496] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.525528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.541138] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.541169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.558367] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.558396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.573002] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.573030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.588114] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.588142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.600506] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.600553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.616075] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.616106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.634298] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.634328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.648889] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.648918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.665045] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.665106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.680809] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.680873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.723 [2024-04-25 17:19:07.690833] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.723 [2024-04-25 17:19:07.690863] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.723 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.982 [2024-04-25 17:19:07.705511] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.982 [2024-04-25 17:19:07.705541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.982 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.982 [2024-04-25 17:19:07.721986] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.722017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.742980] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.743011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.759691] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.759731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.777046] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.777077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.792140] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.792171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.808121] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.808152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.825147] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.825178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.841122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.841154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.858388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.858419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.874146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.874175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.885431] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.885460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.902162] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.902191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.918126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.918156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.929388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.929417] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:37.983 [2024-04-25 17:19:07.946052] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:37.983 [2024-04-25 17:19:07.946081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:37.983 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.242 [2024-04-25 17:19:07.962779] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.242 [2024-04-25 17:19:07.962807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.242 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.242 [2024-04-25 17:19:07.978147] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.242 [2024-04-25 17:19:07.978177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.242 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:07.993280] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:07.993309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:38.243 [2024-04-25 17:19:08.004494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.004526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.020893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.020921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.037645] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.037674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.054135] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.054165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.070531] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.070573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.087304] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.087349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.104845] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.104889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.120197] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.120241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.131848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.131893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.148734] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.148791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.163202] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.163246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.178673] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.178725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.196204] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.196248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.243 [2024-04-25 17:19:08.211830] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.243 [2024-04-25 17:19:08.211875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.243 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.502 [2024-04-25 17:19:08.223363] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.502 [2024-04-25 17:19:08.223408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.502 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.502 [2024-04-25 17:19:08.239429] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.502 [2024-04-25 17:19:08.239473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.502 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.257085] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.257130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.273095] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.273136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.290561] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.290604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.307014] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.307059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.324434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.324466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.339868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.339913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.350890] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.350935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.367402] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.367446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.382996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.383041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.398111] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.398152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.414314] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.414359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.430721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.430777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.447274] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.447319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.503 [2024-04-25 17:19:08.465232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.503 [2024-04-25 17:19:08.465278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.503 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.762 [2024-04-25 17:19:08.481761] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.762 [2024-04-25 17:19:08.481805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.762 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.762 [2024-04-25 17:19:08.498652] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.762 [2024-04-25 17:19:08.498697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.762 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.762 [2024-04-25 17:19:08.515754] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.762 [2024-04-25 17:19:08.515798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.762 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.762 [2024-04-25 17:19:08.532519] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.762 [2024-04-25 17:19:08.532550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.762 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.762 [2024-04-25 17:19:08.549386] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.762 [2024-04-25 17:19:08.549431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.762 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.565694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.565749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.583103] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.583148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.599558] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.599602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.617045] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.617090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.633422] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:38.763 [2024-04-25 17:19:08.633466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.649980] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.650011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.665546] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.665588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.676884] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.676913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.693883] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.693917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.709458] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.709502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:38.763 [2024-04-25 17:19:08.725631] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:38.763 [2024-04-25 17:19:08.725677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:38.763 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.743284] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.743315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.758580] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.758626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.769839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.769870] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.786527] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.786572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.802176] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.802220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.813934] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.813966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.829205] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.829250] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.846452] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.846497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.863006] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.863037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.879413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.879459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.895476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.895522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.912908] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.912953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.929535] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.929579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.945055] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.945101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.961759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.961802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.978523] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.978569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.023 [2024-04-25 17:19:08.995279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.023 [2024-04-25 17:19:08.995327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.023 2024/04/25 17:19:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.011564] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.011611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.023127] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.023174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.040177] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.040222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.054681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.054755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.070788] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.070833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.082438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.082482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.098048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.098093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.115126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.115171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.131992] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.132037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:39.283 [2024-04-25 17:19:09.148580] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.148642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.165438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.165482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.182146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.182191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.197382] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.197428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.213020] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.213066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.230022] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.230066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.283 [2024-04-25 17:19:09.246790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.283 [2024-04-25 17:19:09.246834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.283 2024/04/25 17:19:09 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.264047] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.264093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.278906] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.278950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.294589] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.294636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.311086] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.311130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.328626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.328671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.345522] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.345567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.360996] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.361040] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.378055] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.378099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.392499] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.392529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.408417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.408448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.425236] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.425281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.442322] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.442367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.453536] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.453581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.470404] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.470449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.480034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.480080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.494371] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.494415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.543 [2024-04-25 17:19:09.509899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.543 [2024-04-25 17:19:09.509944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.543 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.528046] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.528090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.545153] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.545198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.561093] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.561136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.573035] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.573080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.590214] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.590258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.599621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.599665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.612966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.613012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.629329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.629373] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.646041] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.646086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.662848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.662893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.679099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.679144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.803 [2024-04-25 17:19:09.695947] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.803 [2024-04-25 17:19:09.695991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.803 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.804 [2024-04-25 17:19:09.710379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.804 [2024-04-25 17:19:09.710428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.804 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.804 [2024-04-25 17:19:09.726445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.804 [2024-04-25 17:19:09.726490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.804 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.804 [2024-04-25 17:19:09.743359] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.804 [2024-04-25 17:19:09.743403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.804 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.804 [2024-04-25 17:19:09.761032] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:39.804 [2024-04-25 17:19:09.761076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.804 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:39.804 [2024-04-25 17:19:09.775945] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:39.804 [2024-04-25 17:19:09.775990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:39.804 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.791341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.791386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.808063] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.808094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.824063] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.824093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.841567] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.841612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.853158] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.853203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.868845] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.868889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.885736] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.885781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.902777] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.902821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.920590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.920645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.935591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.935620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.951646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.063 [2024-04-25 17:19:09.951675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.063 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.063 [2024-04-25 17:19:09.969060] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.064 [2024-04-25 17:19:09.969089] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.064 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.064 [2024-04-25 17:19:09.980235] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.064 [2024-04-25 17:19:09.980264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.064 2024/04/25 17:19:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.064 [2024-04-25 17:19:09.996086] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.064 [2024-04-25 17:19:09.996115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.064 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.064 [2024-04-25 17:19:10.014377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.064 [2024-04-25 17:19:10.014409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.064 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.064 [2024-04-25 17:19:10.029459] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.064 [2024-04-25 17:19:10.029509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.064 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.323 [2024-04-25 17:19:10.045873] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.323 [2024-04-25 17:19:10.045903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.323 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.323 [2024-04-25 17:19:10.057834] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.323 [2024-04-25 17:19:10.057866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.323 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.323 [2024-04-25 17:19:10.075244] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.323 [2024-04-25 17:19:10.075276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.323 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.323 [2024-04-25 17:19:10.089874] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.323 [2024-04-25 17:19:10.089905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.323 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.323 [2024-04-25 17:19:10.105068] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.323 [2024-04-25 17:19:10.105099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.323 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.323 [2024-04-25 17:19:10.120001] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.323 [2024-04-25 17:19:10.120032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.135694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.135749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.152951] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.152981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.162417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.162447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.176468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.176501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.186557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.186587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.201345] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.201376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.211285] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.211315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.225289] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.225319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.240323] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.240355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:40.324 [2024-04-25 17:19:10.255881] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.255910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.272995] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.273025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.324 [2024-04-25 17:19:10.289868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.324 [2024-04-25 17:19:10.289897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.324 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.306068] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.306113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.323192] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.323236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.337164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.337208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 00:15:40.584 Latency(us) 00:15:40.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.584 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:40.584 Nvme1n1 : 5.01 12933.70 101.04 0.00 0.00 9884.43 
4140.68 19899.11 00:15:40.584 =================================================================================================================== 00:15:40.584 Total : 12933.70 101.04 0.00 0.00 9884.43 4140.68 19899.11 00:15:40.584 [2024-04-25 17:19:10.346795] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.346824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.358779] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.358805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.370816] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.370853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.382819] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.382854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.394820] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.394859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.406794] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.406826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.418808] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.418841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.584 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.584 [2024-04-25 17:19:10.430787] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.584 [2024-04-25 17:19:10.430809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 [2024-04-25 17:19:10.442766] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.585 [2024-04-25 17:19:10.442787] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 [2024-04-25 17:19:10.454820] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.585 [2024-04-25 17:19:10.454855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 [2024-04-25 17:19:10.466800] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.585 [2024-04-25 17:19:10.466824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 [2024-04-25 17:19:10.478774] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.585 [2024-04-25 17:19:10.478795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 [2024-04-25 17:19:10.490812] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.585 [2024-04-25 17:19:10.490856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 [2024-04-25 17:19:10.502800] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.585 [2024-04-25 17:19:10.502842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 [2024-04-25 17:19:10.514792] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:40.585 [2024-04-25 17:19:10.514817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:40.585 2024/04/25 17:19:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:40.585 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (79400) - No such process 00:15:40.585 17:19:10 -- target/zcopy.sh@49 -- # wait 79400 00:15:40.585 17:19:10 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.585 17:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.585 17:19:10 -- common/autotest_common.sh@10 -- # set +x 00:15:40.585 17:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.585 17:19:10 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:40.585 17:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.585 17:19:10 -- common/autotest_common.sh@10 -- # set +x 00:15:40.585 delay0 00:15:40.585 17:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.585 17:19:10 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:40.585 17:19:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.585 17:19:10 -- common/autotest_common.sh@10 -- # set +x 00:15:40.585 17:19:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.585 17:19:10 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:40.844 [2024-04-25 17:19:10.704978] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:47.405 Initializing NVMe Controllers 00:15:47.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:47.405 Initialization complete. Launching workers. 
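Note on the run of Code=-32602 responses above: each retry of nvmf_subsystem_add_ns asks for NSID 1 while that NSID is already occupied by the existing malloc0 namespace, so the target rejects every call with "Requested NSID 1 already in use" / Invalid parameters - an expected error path in this test (the retry loop is then reaped: "kill: (79400) - No such process"). After that, the test swaps malloc0 for a 1-second delay bdev, presumably so the abort example launched above has long-lived I/O to cancel. A minimal sketch of the same namespace swap with the standalone rpc.py client, assuming a target on the default /var/tmp/spdk.sock socket and an existing malloc0 bdev (the test itself drives these through its rpc_cmd wrapper and the Go JSON-RPC client):

  # drop the namespace pinned to NSID 1, then stack a delay bdev (1,000,000 us latencies) on malloc0
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # re-add the delayed bdev as NSID 1 so in-flight I/O is slow enough for aborts to land
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1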
00:15:47.405 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:15:47.405 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 33 00:15:47.405 success 194, unsuccess 180, failed 0 00:15:47.405 17:19:16 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:47.405 17:19:16 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:47.405 17:19:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:47.405 17:19:16 -- nvmf/common.sh@117 -- # sync 00:15:47.405 17:19:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.405 17:19:16 -- nvmf/common.sh@120 -- # set +e 00:15:47.405 17:19:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.405 17:19:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.405 rmmod nvme_tcp 00:15:47.405 rmmod nvme_fabrics 00:15:47.405 rmmod nvme_keyring 00:15:47.405 17:19:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.405 17:19:16 -- nvmf/common.sh@124 -- # set -e 00:15:47.405 17:19:16 -- nvmf/common.sh@125 -- # return 0 00:15:47.405 17:19:16 -- nvmf/common.sh@478 -- # '[' -n 79250 ']' 00:15:47.405 17:19:16 -- nvmf/common.sh@479 -- # killprocess 79250 00:15:47.405 17:19:16 -- common/autotest_common.sh@936 -- # '[' -z 79250 ']' 00:15:47.405 17:19:16 -- common/autotest_common.sh@940 -- # kill -0 79250 00:15:47.405 17:19:16 -- common/autotest_common.sh@941 -- # uname 00:15:47.405 17:19:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.405 17:19:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79250 00:15:47.405 killing process with pid 79250 00:15:47.405 17:19:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:47.405 17:19:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:47.405 17:19:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79250' 00:15:47.405 17:19:16 -- common/autotest_common.sh@955 -- # kill 79250 00:15:47.405 17:19:16 -- common/autotest_common.sh@960 -- # wait 79250 00:15:47.405 17:19:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:47.405 17:19:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:47.405 17:19:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:47.405 17:19:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.405 17:19:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.405 17:19:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.405 17:19:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.405 17:19:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.405 17:19:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:47.405 00:15:47.405 real 0m23.635s 00:15:47.405 user 0m38.708s 00:15:47.405 sys 0m6.314s 00:15:47.405 17:19:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:47.405 17:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.405 ************************************ 00:15:47.405 END TEST nvmf_zcopy 00:15:47.405 ************************************ 00:15:47.405 17:19:17 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:47.405 17:19:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:47.405 17:19:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.405 17:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.405 ************************************ 00:15:47.405 START TEST nvmf_nmic 
00:15:47.405 ************************************ 00:15:47.405 17:19:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:47.405 * Looking for test storage... 00:15:47.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:47.405 17:19:17 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.405 17:19:17 -- nvmf/common.sh@7 -- # uname -s 00:15:47.405 17:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.405 17:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.405 17:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.405 17:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.405 17:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.405 17:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.405 17:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.405 17:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.406 17:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.406 17:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.406 17:19:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:47.406 17:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:47.406 17:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.406 17:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.406 17:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.406 17:19:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.406 17:19:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.406 17:19:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.406 17:19:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.406 17:19:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.406 17:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.406 17:19:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.406 17:19:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.406 17:19:17 -- paths/export.sh@5 -- # export PATH 00:15:47.406 17:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.406 17:19:17 -- nvmf/common.sh@47 -- # : 0 00:15:47.406 17:19:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.406 17:19:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.406 17:19:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.406 17:19:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.406 17:19:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.406 17:19:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.406 17:19:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.406 17:19:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.406 17:19:17 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.406 17:19:17 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.406 17:19:17 -- target/nmic.sh@14 -- # nvmftestinit 00:15:47.406 17:19:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:47.406 17:19:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.406 17:19:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:47.406 17:19:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:47.406 17:19:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:47.406 17:19:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.406 17:19:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.406 17:19:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.406 17:19:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:47.406 17:19:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:47.406 17:19:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:47.406 17:19:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:47.406 17:19:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:47.406 17:19:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:47.406 17:19:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.406 17:19:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.406 17:19:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:47.406 17:19:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:47.406 17:19:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.406 17:19:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.406 17:19:17 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.406 17:19:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.406 17:19:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.406 17:19:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.406 17:19:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.406 17:19:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.406 17:19:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:47.406 17:19:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:47.406 Cannot find device "nvmf_tgt_br" 00:15:47.406 17:19:17 -- nvmf/common.sh@155 -- # true 00:15:47.406 17:19:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.406 Cannot find device "nvmf_tgt_br2" 00:15:47.406 17:19:17 -- nvmf/common.sh@156 -- # true 00:15:47.406 17:19:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:47.406 17:19:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:47.406 Cannot find device "nvmf_tgt_br" 00:15:47.406 17:19:17 -- nvmf/common.sh@158 -- # true 00:15:47.406 17:19:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:47.406 Cannot find device "nvmf_tgt_br2" 00:15:47.406 17:19:17 -- nvmf/common.sh@159 -- # true 00:15:47.406 17:19:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:47.665 17:19:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:47.665 17:19:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.665 17:19:17 -- nvmf/common.sh@162 -- # true 00:15:47.665 17:19:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:47.665 17:19:17 -- nvmf/common.sh@163 -- # true 00:15:47.665 17:19:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:47.665 17:19:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:47.665 17:19:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:47.665 17:19:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:47.665 17:19:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:47.665 17:19:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:47.665 17:19:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:47.665 17:19:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:47.665 17:19:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:47.665 17:19:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:47.665 17:19:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:47.665 17:19:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:47.665 17:19:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:47.665 17:19:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:47.665 17:19:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:47.665 17:19:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:47.665 17:19:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:47.665 17:19:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:47.665 17:19:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:47.665 17:19:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:47.665 17:19:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:47.665 17:19:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:47.665 17:19:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:47.665 17:19:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:47.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:47.665 00:15:47.665 --- 10.0.0.2 ping statistics --- 00:15:47.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.665 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:47.665 17:19:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:47.665 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:47.665 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:15:47.665 00:15:47.665 --- 10.0.0.3 ping statistics --- 00:15:47.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.665 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:47.665 17:19:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:47.665 00:15:47.665 --- 10.0.0.1 ping statistics --- 00:15:47.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.665 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:47.665 17:19:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.665 17:19:17 -- nvmf/common.sh@422 -- # return 0 00:15:47.665 17:19:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:47.665 17:19:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.665 17:19:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:47.665 17:19:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:47.665 17:19:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.665 17:19:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:47.665 17:19:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:47.947 17:19:17 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:47.947 17:19:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:47.947 17:19:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:47.947 17:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.947 17:19:17 -- nvmf/common.sh@470 -- # nvmfpid=79719 00:15:47.947 17:19:17 -- nvmf/common.sh@471 -- # waitforlisten 79719 00:15:47.947 17:19:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:47.947 17:19:17 -- common/autotest_common.sh@817 -- # '[' -z 79719 ']' 00:15:47.947 17:19:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.947 17:19:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
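The interface plumbing traced above is nvmf_veth_init building the bridged veth topology used for NET_TYPE=virt: the initiator interface stays in the root namespace at 10.0.0.1, the target interfaces are moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, everything is joined through the nvmf_br bridge, and an iptables rule admits TCP port 4420. Condensed from the commands above into a rough sketch (the second target interface is handled the same way, and each link is brought up afterwards):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT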
00:15:47.947 17:19:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.947 17:19:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.947 17:19:17 -- common/autotest_common.sh@10 -- # set +x 00:15:47.947 [2024-04-25 17:19:17.717118] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:47.947 [2024-04-25 17:19:17.717200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.947 [2024-04-25 17:19:17.854013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.218 [2024-04-25 17:19:17.919132] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.218 [2024-04-25 17:19:17.919196] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.218 [2024-04-25 17:19:17.919223] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.218 [2024-04-25 17:19:17.919231] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.218 [2024-04-25 17:19:17.919237] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.218 [2024-04-25 17:19:17.919418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.218 [2024-04-25 17:19:17.919571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.218 [2024-04-25 17:19:17.920127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.218 [2024-04-25 17:19:17.920129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.784 17:19:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:48.785 17:19:18 -- common/autotest_common.sh@850 -- # return 0 00:15:48.785 17:19:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:48.785 17:19:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:48.785 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:48.785 17:19:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.785 17:19:18 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:48.785 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.785 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:48.785 [2024-04-25 17:19:18.747934] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 Malloc0 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 [2024-04-25 17:19:18.807413] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:49.044 test case1: single bdev can't be used in multiple subsystems 00:15:49.044 17:19:18 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@28 -- # nmic_status=0 00:15:49.044 17:19:18 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 [2024-04-25 17:19:18.831297] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:49.044 [2024-04-25 17:19:18.831346] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:49.044 [2024-04-25 17:19:18.831374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.044 2024/04/25 17:19:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.044 request: 00:15:49.044 { 00:15:49.044 "method": "nvmf_subsystem_add_ns", 00:15:49.044 "params": { 00:15:49.044 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:49.044 "namespace": { 00:15:49.044 "bdev_name": "Malloc0", 00:15:49.044 "no_auto_visible": false 00:15:49.044 } 00:15:49.044 } 00:15:49.044 } 00:15:49.044 Got JSON-RPC error response 00:15:49.044 GoRPCClient: error on JSON-RPC call 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@29 -- # nmic_status=1 00:15:49.044 17:19:18 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:49.044 Adding namespace failed - expected result. 00:15:49.044 17:19:18 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
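Test case 1 above boils down to a short rpc.py sequence: create the TCP transport, one malloc bdev, a first subsystem that claims that bdev, and a second subsystem whose attempt to add the same bdev must fail. A minimal sketch of the equivalent calls, using the command strings from the trace (rpc_cmd is the harness wrapper around scripts/rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # first subsystem claims Malloc0 and listens on 10.0.0.2:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # second subsystem: adding the already-claimed bdev is expected to fail
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && echo "unexpected success"

The failure surfaces as the Code=-32602 "Invalid parameters" JSON-RPC error shown above: Malloc0 is already claimed with an exclusive_write descriptor by the first subsystem, so the second nvmf_subsystem_add_ns is rejected, which is exactly the result the test expects.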
00:15:49.044 test case2: host connect to nvmf target in multiple paths 00:15:49.044 17:19:18 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:49.044 17:19:18 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:49.044 17:19:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.044 17:19:18 -- common/autotest_common.sh@10 -- # set +x 00:15:49.044 [2024-04-25 17:19:18.843387] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:49.044 17:19:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.044 17:19:18 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:49.044 17:19:19 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:49.303 17:19:19 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:49.303 17:19:19 -- common/autotest_common.sh@1184 -- # local i=0 00:15:49.303 17:19:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:49.303 17:19:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:49.303 17:19:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:51.207 17:19:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:51.207 17:19:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:51.207 17:19:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:51.465 17:19:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:51.465 17:19:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.465 17:19:21 -- common/autotest_common.sh@1194 -- # return 0 00:15:51.465 17:19:21 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:51.465 [global] 00:15:51.465 thread=1 00:15:51.465 invalidate=1 00:15:51.465 rw=write 00:15:51.465 time_based=1 00:15:51.465 runtime=1 00:15:51.465 ioengine=libaio 00:15:51.465 direct=1 00:15:51.465 bs=4096 00:15:51.465 iodepth=1 00:15:51.465 norandommap=0 00:15:51.465 numjobs=1 00:15:51.465 00:15:51.465 verify_dump=1 00:15:51.465 verify_backlog=512 00:15:51.465 verify_state_save=0 00:15:51.465 do_verify=1 00:15:51.465 verify=crc32c-intel 00:15:51.465 [job0] 00:15:51.465 filename=/dev/nvme0n1 00:15:51.465 Could not set queue depth (nvme0n1) 00:15:51.465 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.465 fio-3.35 00:15:51.465 Starting 1 thread 00:15:52.843 00:15:52.843 job0: (groupid=0, jobs=1): err= 0: pid=79829: Thu Apr 25 17:19:22 2024 00:15:52.843 read: IOPS=3233, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec) 00:15:52.843 slat (nsec): min=11868, max=62212, avg=14856.60, stdev=4247.07 00:15:52.843 clat (usec): min=122, max=545, avg=148.92, stdev=23.90 00:15:52.843 lat (usec): min=135, max=570, avg=163.78, stdev=24.54 00:15:52.843 clat percentiles (usec): 00:15:52.843 | 1.00th=[ 127], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 135], 00:15:52.843 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:15:52.843 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 176], 
95.00th=[ 186], 00:15:52.843 | 99.00th=[ 212], 99.50th=[ 227], 99.90th=[ 510], 99.95th=[ 537], 00:15:52.843 | 99.99th=[ 545] 00:15:52.843 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:52.843 slat (nsec): min=16415, max=77039, avg=22425.77, stdev=6508.63 00:15:52.843 clat (usec): min=84, max=418, avg=105.69, stdev=20.13 00:15:52.843 lat (usec): min=101, max=468, avg=128.12, stdev=21.90 00:15:52.843 clat percentiles (usec): 00:15:52.843 | 1.00th=[ 88], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 94], 00:15:52.843 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 99], 60.00th=[ 102], 00:15:52.843 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 128], 95.00th=[ 141], 00:15:52.843 | 99.00th=[ 165], 99.50th=[ 182], 99.90th=[ 355], 99.95th=[ 379], 00:15:52.843 | 99.99th=[ 420] 00:15:52.843 bw ( KiB/s): min=14632, max=14632, per=100.00%, avg=14632.00, stdev= 0.00, samples=1 00:15:52.843 iops : min= 3658, max= 3658, avg=3658.00, stdev= 0.00, samples=1 00:15:52.843 lat (usec) : 100=27.88%, 250=71.84%, 500=0.22%, 750=0.06% 00:15:52.843 cpu : usr=2.40%, sys=9.30%, ctx=6821, majf=0, minf=2 00:15:52.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:52.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.843 issued rwts: total=3237,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:52.843 00:15:52.843 Run status group 0 (all jobs): 00:15:52.843 READ: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=12.6MiB (13.3MB), run=1001-1001msec 00:15:52.843 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:15:52.843 00:15:52.843 Disk stats (read/write): 00:15:52.843 nvme0n1: ios=3044/3072, merge=0/0, ticks=479/362, in_queue=841, util=90.96% 00:15:52.843 17:19:22 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:52.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:52.843 17:19:22 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:52.843 17:19:22 -- common/autotest_common.sh@1205 -- # local i=0 00:15:52.843 17:19:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:52.843 17:19:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:52.843 17:19:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:52.843 17:19:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:52.843 17:19:22 -- common/autotest_common.sh@1217 -- # return 0 00:15:52.843 17:19:22 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:52.843 17:19:22 -- target/nmic.sh@53 -- # nvmftestfini 00:15:52.843 17:19:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:52.843 17:19:22 -- nvmf/common.sh@117 -- # sync 00:15:52.843 17:19:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.843 17:19:22 -- nvmf/common.sh@120 -- # set +e 00:15:52.843 17:19:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.843 17:19:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.843 rmmod nvme_tcp 00:15:52.843 rmmod nvme_fabrics 00:15:52.843 rmmod nvme_keyring 00:15:52.843 17:19:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.843 17:19:22 -- nvmf/common.sh@124 -- # set -e 00:15:52.843 17:19:22 -- nvmf/common.sh@125 -- # return 0 00:15:52.843 17:19:22 -- 
nvmf/common.sh@478 -- # '[' -n 79719 ']' 00:15:52.843 17:19:22 -- nvmf/common.sh@479 -- # killprocess 79719 00:15:52.843 17:19:22 -- common/autotest_common.sh@936 -- # '[' -z 79719 ']' 00:15:52.843 17:19:22 -- common/autotest_common.sh@940 -- # kill -0 79719 00:15:52.843 17:19:22 -- common/autotest_common.sh@941 -- # uname 00:15:52.843 17:19:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.843 17:19:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79719 00:15:52.843 17:19:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:52.843 17:19:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:52.843 killing process with pid 79719 00:15:52.843 17:19:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79719' 00:15:52.843 17:19:22 -- common/autotest_common.sh@955 -- # kill 79719 00:15:52.843 17:19:22 -- common/autotest_common.sh@960 -- # wait 79719 00:15:53.102 17:19:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:53.102 17:19:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:53.102 17:19:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:53.102 17:19:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.102 17:19:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.102 17:19:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.102 17:19:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.102 17:19:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.102 17:19:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:53.102 00:15:53.102 real 0m5.727s 00:15:53.102 user 0m19.230s 00:15:53.102 sys 0m1.411s 00:15:53.102 17:19:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:53.102 ************************************ 00:15:53.102 END TEST nvmf_nmic 00:15:53.102 ************************************ 00:15:53.102 17:19:22 -- common/autotest_common.sh@10 -- # set +x 00:15:53.102 17:19:22 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:53.102 17:19:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.102 17:19:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.102 17:19:22 -- common/autotest_common.sh@10 -- # set +x 00:15:53.102 ************************************ 00:15:53.102 START TEST nvmf_fio_target 00:15:53.102 ************************************ 00:15:53.102 17:19:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:53.102 * Looking for test storage... 
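The nvmf_nmic teardown traced just above follows the usual pattern: disconnect the host, unload the NVMe transport modules, stop the target, and clear the test interface. A rough equivalent of those steps is sketched below; the internals of nvmftestfini and _remove_spdk_ns are not fully visible in this log, so the namespace deletion in particular is an assumption.

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp       # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # nvmfpid=79719 in this run
    ip netns delete nvmf_tgt_ns_spdk       # assumed: what _remove_spdk_ns presumably does
    ip -4 addr flush nvmf_init_if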
00:15:53.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.361 17:19:23 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.361 17:19:23 -- nvmf/common.sh@7 -- # uname -s 00:15:53.361 17:19:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.361 17:19:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.361 17:19:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.361 17:19:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.361 17:19:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.361 17:19:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.361 17:19:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.361 17:19:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.361 17:19:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.361 17:19:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.361 17:19:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:53.361 17:19:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:15:53.361 17:19:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.361 17:19:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.361 17:19:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.361 17:19:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.361 17:19:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.361 17:19:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.361 17:19:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.361 17:19:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.361 17:19:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.361 17:19:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.361 17:19:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.361 17:19:23 -- paths/export.sh@5 -- # export PATH 00:15:53.361 17:19:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.361 17:19:23 -- nvmf/common.sh@47 -- # : 0 00:15:53.361 17:19:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.361 17:19:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.361 17:19:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.361 17:19:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.361 17:19:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.361 17:19:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.361 17:19:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.361 17:19:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.361 17:19:23 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.361 17:19:23 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.361 17:19:23 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.361 17:19:23 -- target/fio.sh@16 -- # nvmftestinit 00:15:53.361 17:19:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:53.361 17:19:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.361 17:19:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:53.361 17:19:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:53.361 17:19:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:53.361 17:19:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.361 17:19:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.361 17:19:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.361 17:19:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:53.361 17:19:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:53.361 17:19:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:53.361 17:19:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:53.361 17:19:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:53.361 17:19:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:53.361 17:19:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.361 17:19:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.361 17:19:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.361 17:19:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:53.361 17:19:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.361 17:19:23 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.361 17:19:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.361 17:19:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.361 17:19:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.361 17:19:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.361 17:19:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.361 17:19:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.361 17:19:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:53.361 17:19:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:53.361 Cannot find device "nvmf_tgt_br" 00:15:53.361 17:19:23 -- nvmf/common.sh@155 -- # true 00:15:53.361 17:19:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.361 Cannot find device "nvmf_tgt_br2" 00:15:53.361 17:19:23 -- nvmf/common.sh@156 -- # true 00:15:53.361 17:19:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:53.361 17:19:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:53.361 Cannot find device "nvmf_tgt_br" 00:15:53.361 17:19:23 -- nvmf/common.sh@158 -- # true 00:15:53.361 17:19:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:53.361 Cannot find device "nvmf_tgt_br2" 00:15:53.361 17:19:23 -- nvmf/common.sh@159 -- # true 00:15:53.361 17:19:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:53.361 17:19:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:53.361 17:19:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.361 17:19:23 -- nvmf/common.sh@162 -- # true 00:15:53.361 17:19:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.361 17:19:23 -- nvmf/common.sh@163 -- # true 00:15:53.361 17:19:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.361 17:19:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.361 17:19:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.361 17:19:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.361 17:19:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.361 17:19:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.361 17:19:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.361 17:19:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.361 17:19:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.361 17:19:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:53.361 17:19:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:53.620 17:19:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:53.620 17:19:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:53.620 17:19:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.620 17:19:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
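The fio target stage re-runs nvmf_veth_init, so it first tears down whatever topology a previous test left behind and then recreates it; the "Cannot find device" and "Cannot open network namespace" messages above are the expected result of that cleanup running against an already-clean host (each failing step is followed by a true in the trace, i.e. the failure is tolerated). A condensed sketch of that reset phase, using the names from the log; the 2>/dev/null || true error handling is this sketch's formulation, not necessarily the script's:

    # detach and remove any leftover bridge members; missing devices are not fatal
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true

    # recreate the namespace, veth pairs and addresses exactly as before
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2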
00:15:53.620 17:19:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.620 17:19:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:53.620 17:19:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:53.620 17:19:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.620 17:19:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.620 17:19:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.620 17:19:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.620 17:19:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.620 17:19:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:53.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:53.620 00:15:53.620 --- 10.0.0.2 ping statistics --- 00:15:53.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.620 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:53.620 17:19:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:53.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:53.620 00:15:53.620 --- 10.0.0.3 ping statistics --- 00:15:53.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.620 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:53.620 17:19:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:53.620 00:15:53.620 --- 10.0.0.1 ping statistics --- 00:15:53.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.620 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:53.620 17:19:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.620 17:19:23 -- nvmf/common.sh@422 -- # return 0 00:15:53.620 17:19:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:53.620 17:19:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.620 17:19:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:53.620 17:19:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:53.620 17:19:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.620 17:19:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:53.620 17:19:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:53.620 17:19:23 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:53.620 17:19:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:53.620 17:19:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:53.620 17:19:23 -- common/autotest_common.sh@10 -- # set +x 00:15:53.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.620 17:19:23 -- nvmf/common.sh@470 -- # nvmfpid=80015 00:15:53.620 17:19:23 -- nvmf/common.sh@471 -- # waitforlisten 80015 00:15:53.620 17:19:23 -- common/autotest_common.sh@817 -- # '[' -z 80015 ']' 00:15:53.620 17:19:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.620 17:19:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.620 17:19:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
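With the namespace network back up, the fio target is started the same way as before: nvmf_tgt runs inside nvmf_tgt_ns_spdk and the harness waits for its JSON-RPC socket before issuing any configuration, then builds the bdevs that the trace below exports. The socket-polling loop in this sketch is an assumption (waitforlisten's real implementation is not shown in this log); the rpc.py invocations are taken from the subsequent trace.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # simplified stand-in for waitforlisten: wait for the RPC socket to appear
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # seven 64 MiB malloc bdevs: two stay plain, two back a raid0 bdev, three back a concat bdev
    for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # one subsystem exports Malloc0, Malloc1, raid0 and concat0 on 10.0.0.2:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for b in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The trace then connects the initiator over nvme connect to 10.0.0.2:4420 and waits for four namespaces with serial SPDKISFASTANDAWESOME to appear (nvme_devices=4) before starting the fio job files against /dev/nvme0n1 through /dev/nvme0n4.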
00:15:53.620 17:19:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.620 17:19:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:53.620 17:19:23 -- common/autotest_common.sh@10 -- # set +x 00:15:53.620 [2024-04-25 17:19:23.524220] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:15:53.620 [2024-04-25 17:19:23.524351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.878 [2024-04-25 17:19:23.655210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.878 [2024-04-25 17:19:23.706870] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.878 [2024-04-25 17:19:23.706936] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.878 [2024-04-25 17:19:23.706945] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.878 [2024-04-25 17:19:23.706952] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.878 [2024-04-25 17:19:23.706958] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.878 [2024-04-25 17:19:23.707072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.878 [2024-04-25 17:19:23.707199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.878 [2024-04-25 17:19:23.707980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.878 [2024-04-25 17:19:23.707986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.939 17:19:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:54.939 17:19:24 -- common/autotest_common.sh@850 -- # return 0 00:15:54.939 17:19:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:54.939 17:19:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:54.939 17:19:24 -- common/autotest_common.sh@10 -- # set +x 00:15:54.939 17:19:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.939 17:19:24 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:54.939 [2024-04-25 17:19:24.669869] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.939 17:19:24 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.199 17:19:24 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:55.199 17:19:24 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.199 17:19:25 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:55.199 17:19:25 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.457 17:19:25 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:55.457 17:19:25 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.025 17:19:25 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:56.025 17:19:25 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:56.025 17:19:25 -- target/fio.sh@29 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.283 17:19:26 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:56.283 17:19:26 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.542 17:19:26 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:56.542 17:19:26 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.800 17:19:26 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:56.800 17:19:26 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:57.058 17:19:26 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:57.317 17:19:27 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:57.317 17:19:27 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.576 17:19:27 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:57.576 17:19:27 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.576 17:19:27 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.835 [2024-04-25 17:19:27.728869] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.835 17:19:27 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:58.093 17:19:27 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:58.352 17:19:28 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:58.352 17:19:28 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:58.352 17:19:28 -- common/autotest_common.sh@1184 -- # local i=0 00:15:58.352 17:19:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.352 17:19:28 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:15:58.352 17:19:28 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:15:58.352 17:19:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:00.882 17:19:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:00.882 17:19:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:00.882 17:19:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:00.882 17:19:30 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:00.882 17:19:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.882 17:19:30 -- common/autotest_common.sh@1194 -- # return 0 00:16:00.882 17:19:30 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:00.882 [global] 00:16:00.882 thread=1 00:16:00.882 invalidate=1 00:16:00.882 rw=write 00:16:00.882 time_based=1 00:16:00.882 runtime=1 00:16:00.882 ioengine=libaio 00:16:00.882 direct=1 00:16:00.882 bs=4096 00:16:00.882 iodepth=1 00:16:00.882 norandommap=0 
00:16:00.882 numjobs=1 00:16:00.882 00:16:00.882 verify_dump=1 00:16:00.882 verify_backlog=512 00:16:00.882 verify_state_save=0 00:16:00.882 do_verify=1 00:16:00.882 verify=crc32c-intel 00:16:00.882 [job0] 00:16:00.882 filename=/dev/nvme0n1 00:16:00.882 [job1] 00:16:00.882 filename=/dev/nvme0n2 00:16:00.882 [job2] 00:16:00.882 filename=/dev/nvme0n3 00:16:00.882 [job3] 00:16:00.882 filename=/dev/nvme0n4 00:16:00.882 Could not set queue depth (nvme0n1) 00:16:00.882 Could not set queue depth (nvme0n2) 00:16:00.882 Could not set queue depth (nvme0n3) 00:16:00.882 Could not set queue depth (nvme0n4) 00:16:00.883 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.883 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.883 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.883 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.883 fio-3.35 00:16:00.883 Starting 4 threads 00:16:01.819 00:16:01.819 job0: (groupid=0, jobs=1): err= 0: pid=80303: Thu Apr 25 17:19:31 2024 00:16:01.819 read: IOPS=2652, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:16:01.819 slat (nsec): min=12262, max=47180, avg=15508.40, stdev=4112.97 00:16:01.819 clat (usec): min=141, max=1806, avg=175.81, stdev=38.64 00:16:01.819 lat (usec): min=154, max=1843, avg=191.32, stdev=39.13 00:16:01.819 clat percentiles (usec): 00:16:01.819 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:16:01.819 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:16:01.819 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 206], 00:16:01.819 | 99.00th=[ 223], 99.50th=[ 227], 99.90th=[ 486], 99.95th=[ 758], 00:16:01.819 | 99.99th=[ 1811] 00:16:01.819 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:01.819 slat (nsec): min=14592, max=75097, avg=23010.08, stdev=5716.22 00:16:01.819 clat (usec): min=101, max=235, avg=133.78, stdev=15.87 00:16:01.819 lat (usec): min=120, max=268, avg=156.79, stdev=17.13 00:16:01.819 clat percentiles (usec): 00:16:01.819 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 122], 00:16:01.819 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:16:01.819 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:16:01.819 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 200], 99.95th=[ 219], 00:16:01.819 | 99.99th=[ 235] 00:16:01.819 bw ( KiB/s): min=12288, max=12288, per=26.84%, avg=12288.00, stdev= 0.00, samples=1 00:16:01.819 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:01.819 lat (usec) : 250=99.90%, 500=0.07%, 1000=0.02% 00:16:01.819 lat (msec) : 2=0.02% 00:16:01.819 cpu : usr=2.10%, sys=8.30%, ctx=5727, majf=0, minf=5 00:16:01.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.819 issued rwts: total=2655,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.819 job1: (groupid=0, jobs=1): err= 0: pid=80304: Thu Apr 25 17:19:31 2024 00:16:01.819 read: IOPS=2628, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:16:01.819 slat (usec): min=12, max=235, avg=15.93, stdev= 5.91 00:16:01.819 clat 
(usec): min=106, max=1856, avg=176.77, stdev=40.37 00:16:01.819 lat (usec): min=152, max=1884, avg=192.70, stdev=40.97 00:16:01.819 clat percentiles (usec): 00:16:01.819 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:16:01.819 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:16:01.819 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 210], 00:16:01.819 | 99.00th=[ 231], 99.50th=[ 293], 99.90th=[ 469], 99.95th=[ 578], 00:16:01.819 | 99.99th=[ 1860] 00:16:01.819 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:01.819 slat (nsec): min=18150, max=60764, avg=22978.59, stdev=5287.94 00:16:01.819 clat (usec): min=103, max=561, avg=134.36, stdev=19.56 00:16:01.819 lat (usec): min=123, max=587, avg=157.34, stdev=20.55 00:16:01.819 clat percentiles (usec): 00:16:01.819 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 122], 00:16:01.819 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 135], 00:16:01.819 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 165], 00:16:01.819 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 359], 99.95th=[ 482], 00:16:01.819 | 99.99th=[ 562] 00:16:01.819 bw ( KiB/s): min=12288, max=12288, per=26.84%, avg=12288.00, stdev= 0.00, samples=1 00:16:01.819 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:01.819 lat (usec) : 250=99.60%, 500=0.35%, 750=0.04% 00:16:01.819 lat (msec) : 2=0.02% 00:16:01.819 cpu : usr=2.10%, sys=8.30%, ctx=5704, majf=0, minf=16 00:16:01.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.819 issued rwts: total=2631,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.819 job2: (groupid=0, jobs=1): err= 0: pid=80305: Thu Apr 25 17:19:31 2024 00:16:01.819 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:01.819 slat (nsec): min=12922, max=48793, avg=16426.71, stdev=3872.46 00:16:01.819 clat (usec): min=150, max=1517, avg=187.61, stdev=31.84 00:16:01.819 lat (usec): min=164, max=1535, avg=204.03, stdev=32.03 00:16:01.819 clat percentiles (usec): 00:16:01.819 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:16:01.819 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:16:01.819 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 223], 00:16:01.819 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 265], 99.95th=[ 330], 00:16:01.819 | 99.99th=[ 1516] 00:16:01.819 write: IOPS=2733, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:16:01.819 slat (nsec): min=18682, max=91582, avg=24500.24, stdev=6164.56 00:16:01.819 clat (usec): min=113, max=697, avg=146.57, stdev=20.96 00:16:01.819 lat (usec): min=135, max=720, avg=171.07, stdev=21.75 00:16:01.819 clat percentiles (usec): 00:16:01.819 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:16:01.819 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:16:01.819 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 178], 00:16:01.819 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 474], 99.95th=[ 474], 00:16:01.819 | 99.99th=[ 701] 00:16:01.819 bw ( KiB/s): min= 9592, max=12288, per=23.89%, avg=10940.00, stdev=1906.36, samples=2 00:16:01.819 iops : min= 2398, max= 3072, avg=2735.00, stdev=476.59, samples=2 00:16:01.819 lat 
(usec) : 250=99.79%, 500=0.17%, 750=0.02% 00:16:01.819 lat (msec) : 2=0.02% 00:16:01.819 cpu : usr=1.70%, sys=8.40%, ctx=5296, majf=0, minf=7 00:16:01.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.819 issued rwts: total=2560,2736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.819 job3: (groupid=0, jobs=1): err= 0: pid=80306: Thu Apr 25 17:19:31 2024 00:16:01.819 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:01.819 slat (nsec): min=12307, max=58084, avg=17352.63, stdev=6253.87 00:16:01.819 clat (usec): min=152, max=3734, avg=195.79, stdev=75.84 00:16:01.819 lat (usec): min=167, max=3758, avg=213.14, stdev=76.44 00:16:01.819 clat percentiles (usec): 00:16:01.819 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:16:01.820 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:16:01.820 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 229], 00:16:01.820 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 668], 99.95th=[ 1205], 00:16:01.820 | 99.99th=[ 3720] 00:16:01.820 write: IOPS=2575, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec); 0 zone resets 00:16:01.820 slat (nsec): min=18139, max=70712, avg=23879.62, stdev=6682.39 00:16:01.820 clat (usec): min=111, max=364, avg=148.55, stdev=17.23 00:16:01.820 lat (usec): min=130, max=384, avg=172.43, stdev=19.05 00:16:01.820 clat percentiles (usec): 00:16:01.820 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:16:01.820 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:16:01.820 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 174], 95.00th=[ 182], 00:16:01.820 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 219], 99.95th=[ 223], 00:16:01.820 | 99.99th=[ 363] 00:16:01.820 bw ( KiB/s): min=12232, max=12232, per=26.72%, avg=12232.00, stdev= 0.00, samples=1 00:16:01.820 iops : min= 3058, max= 3058, avg=3058.00, stdev= 0.00, samples=1 00:16:01.820 lat (usec) : 250=99.63%, 500=0.31%, 750=0.02% 00:16:01.820 lat (msec) : 2=0.02%, 4=0.02% 00:16:01.820 cpu : usr=1.70%, sys=8.20%, ctx=5138, majf=0, minf=7 00:16:01.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.820 issued rwts: total=2560,2578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:01.820 00:16:01.820 Run status group 0 (all jobs): 00:16:01.820 READ: bw=40.6MiB/s (42.6MB/s), 9.99MiB/s-10.4MiB/s (10.5MB/s-10.9MB/s), io=40.6MiB (42.6MB), run=1001-1001msec 00:16:01.820 WRITE: bw=44.7MiB/s (46.9MB/s), 10.1MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.8MiB (46.9MB), run=1001-1001msec 00:16:01.820 00:16:01.820 Disk stats (read/write): 00:16:01.820 nvme0n1: ios=2400/2560, merge=0/0, ticks=460/383, in_queue=843, util=88.28% 00:16:01.820 nvme0n2: ios=2374/2560, merge=0/0, ticks=443/371, in_queue=814, util=88.97% 00:16:01.820 nvme0n3: ios=2048/2549, merge=0/0, ticks=396/411, in_queue=807, util=89.16% 00:16:01.820 nvme0n4: ios=2048/2408, merge=0/0, ticks=408/389, in_queue=797, util=89.51% 00:16:01.820 17:19:31 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 1 -t randwrite -r 1 -v 00:16:01.820 [global] 00:16:01.820 thread=1 00:16:01.820 invalidate=1 00:16:01.820 rw=randwrite 00:16:01.820 time_based=1 00:16:01.820 runtime=1 00:16:01.820 ioengine=libaio 00:16:01.820 direct=1 00:16:01.820 bs=4096 00:16:01.820 iodepth=1 00:16:01.820 norandommap=0 00:16:01.820 numjobs=1 00:16:01.820 00:16:01.820 verify_dump=1 00:16:01.820 verify_backlog=512 00:16:01.820 verify_state_save=0 00:16:01.820 do_verify=1 00:16:01.820 verify=crc32c-intel 00:16:01.820 [job0] 00:16:01.820 filename=/dev/nvme0n1 00:16:01.820 [job1] 00:16:01.820 filename=/dev/nvme0n2 00:16:01.820 [job2] 00:16:01.820 filename=/dev/nvme0n3 00:16:01.820 [job3] 00:16:01.820 filename=/dev/nvme0n4 00:16:02.079 Could not set queue depth (nvme0n1) 00:16:02.079 Could not set queue depth (nvme0n2) 00:16:02.079 Could not set queue depth (nvme0n3) 00:16:02.079 Could not set queue depth (nvme0n4) 00:16:02.079 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.079 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.079 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.079 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.079 fio-3.35 00:16:02.079 Starting 4 threads 00:16:03.456 00:16:03.456 job0: (groupid=0, jobs=1): err= 0: pid=80359: Thu Apr 25 17:19:33 2024 00:16:03.456 read: IOPS=2767, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:16:03.456 slat (nsec): min=11715, max=46324, avg=15511.60, stdev=4514.73 00:16:03.456 clat (usec): min=137, max=4437, avg=175.54, stdev=99.57 00:16:03.456 lat (usec): min=151, max=4480, avg=191.05, stdev=100.44 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:16:03.456 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:16:03.456 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 206], 00:16:03.456 | 99.00th=[ 235], 99.50th=[ 330], 99.90th=[ 1762], 99.95th=[ 1811], 00:16:03.456 | 99.99th=[ 4424] 00:16:03.456 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:03.456 slat (usec): min=16, max=101, avg=22.52, stdev= 6.40 00:16:03.456 clat (usec): min=96, max=346, avg=127.61, stdev=16.82 00:16:03.456 lat (usec): min=119, max=370, avg=150.13, stdev=18.54 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:16:03.456 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 125], 60.00th=[ 128], 00:16:03.456 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 159], 00:16:03.456 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 231], 99.95th=[ 343], 00:16:03.456 | 99.99th=[ 347] 00:16:03.456 bw ( KiB/s): min=12288, max=12288, per=31.75%, avg=12288.00, stdev= 0.00, samples=1 00:16:03.456 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:03.456 lat (usec) : 100=0.03%, 250=99.55%, 500=0.33%, 750=0.02% 00:16:03.456 lat (msec) : 2=0.05%, 10=0.02% 00:16:03.456 cpu : usr=1.70%, sys=8.70%, ctx=5842, majf=0, minf=14 00:16:03.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 issued rwts: total=2770,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:03.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.456 job1: (groupid=0, jobs=1): err= 0: pid=80360: Thu Apr 25 17:19:33 2024 00:16:03.456 read: IOPS=1756, BW=7025KiB/s (7194kB/s)(7032KiB/1001msec) 00:16:03.456 slat (nsec): min=10289, max=53485, avg=14827.55, stdev=4334.87 00:16:03.456 clat (usec): min=147, max=41092, avg=304.89, stdev=976.24 00:16:03.456 lat (usec): min=161, max=41108, avg=319.72, stdev=976.26 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 231], 00:16:03.456 | 30.00th=[ 253], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 293], 00:16:03.456 | 70.00th=[ 310], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 396], 00:16:03.456 | 99.00th=[ 429], 99.50th=[ 441], 99.90th=[ 1385], 99.95th=[41157], 00:16:03.456 | 99.99th=[41157] 00:16:03.456 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:03.456 slat (nsec): min=10438, max=74589, avg=19730.69, stdev=6491.96 00:16:03.456 clat (usec): min=103, max=355, avg=190.88, stdev=60.11 00:16:03.456 lat (usec): min=126, max=375, avg=210.61, stdev=59.49 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 133], 00:16:03.456 | 30.00th=[ 141], 40.00th=[ 157], 50.00th=[ 178], 60.00th=[ 196], 00:16:03.456 | 70.00th=[ 237], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 293], 00:16:03.456 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 330], 99.95th=[ 338], 00:16:03.456 | 99.99th=[ 355] 00:16:03.456 bw ( KiB/s): min= 8192, max= 8192, per=21.17%, avg=8192.00, stdev= 0.00, samples=1 00:16:03.456 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:03.456 lat (usec) : 250=52.86%, 500=47.06%, 750=0.03% 00:16:03.456 lat (msec) : 2=0.03%, 50=0.03% 00:16:03.456 cpu : usr=1.50%, sys=5.10%, ctx=3806, majf=0, minf=11 00:16:03.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 issued rwts: total=1758,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.456 job2: (groupid=0, jobs=1): err= 0: pid=80361: Thu Apr 25 17:19:33 2024 00:16:03.456 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:03.456 slat (nsec): min=8343, max=81527, avg=14936.90, stdev=5096.21 00:16:03.456 clat (usec): min=218, max=41127, avg=331.33, stdev=1042.68 00:16:03.456 lat (usec): min=235, max=41137, avg=346.27, stdev=1042.57 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 265], 00:16:03.456 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 306], 00:16:03.456 | 70.00th=[ 330], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 392], 00:16:03.456 | 99.00th=[ 416], 99.50th=[ 420], 99.90th=[ 457], 99.95th=[41157], 00:16:03.456 | 99.99th=[41157] 00:16:03.456 write: IOPS=1859, BW=7437KiB/s (7615kB/s)(7444KiB/1001msec); 0 zone resets 00:16:03.456 slat (nsec): min=10480, max=94316, avg=22847.31, stdev=10047.19 00:16:03.456 clat (usec): min=115, max=2689, avg=225.35, stdev=82.30 00:16:03.456 lat (usec): min=135, max=2724, avg=248.20, stdev=84.80 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 169], 00:16:03.456 | 30.00th=[ 188], 40.00th=[ 212], 50.00th=[ 233], 60.00th=[ 247], 00:16:03.456 | 
70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:16:03.456 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 1303], 99.95th=[ 2704], 00:16:03.456 | 99.99th=[ 2704] 00:16:03.456 bw ( KiB/s): min= 8192, max= 8192, per=21.17%, avg=8192.00, stdev= 0.00, samples=1 00:16:03.456 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:03.456 lat (usec) : 250=38.53%, 500=61.29%, 750=0.09% 00:16:03.456 lat (msec) : 2=0.03%, 4=0.03%, 50=0.03% 00:16:03.456 cpu : usr=1.10%, sys=5.30%, ctx=3401, majf=0, minf=9 00:16:03.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 issued rwts: total=1536,1861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.456 job3: (groupid=0, jobs=1): err= 0: pid=80362: Thu Apr 25 17:19:33 2024 00:16:03.456 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:03.456 slat (nsec): min=12344, max=61651, avg=15525.30, stdev=4524.86 00:16:03.456 clat (usec): min=141, max=502, avg=179.85, stdev=19.82 00:16:03.456 lat (usec): min=154, max=516, avg=195.37, stdev=20.31 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:16:03.456 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 182], 00:16:03.456 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 215], 00:16:03.456 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 314], 99.95th=[ 416], 00:16:03.456 | 99.99th=[ 502] 00:16:03.456 write: IOPS=2701, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:16:03.456 slat (usec): min=18, max=171, avg=24.15, stdev= 8.34 00:16:03.456 clat (usec): min=105, max=562, avg=157.01, stdev=48.23 00:16:03.456 lat (usec): min=126, max=596, avg=181.16, stdev=53.10 00:16:03.456 clat percentiles (usec): 00:16:03.456 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 125], 00:16:03.456 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 145], 00:16:03.456 | 70.00th=[ 155], 80.00th=[ 180], 90.00th=[ 247], 95.00th=[ 262], 00:16:03.456 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 330], 99.95th=[ 375], 00:16:03.456 | 99.99th=[ 562] 00:16:03.456 bw ( KiB/s): min=11640, max=11640, per=30.08%, avg=11640.00, stdev= 0.00, samples=1 00:16:03.456 iops : min= 2910, max= 2910, avg=2910.00, stdev= 0.00, samples=1 00:16:03.456 lat (usec) : 250=95.55%, 500=4.41%, 750=0.04% 00:16:03.456 cpu : usr=2.10%, sys=7.70%, ctx=5267, majf=0, minf=11 00:16:03.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.456 issued rwts: total=2560,2704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.456 00:16:03.456 Run status group 0 (all jobs): 00:16:03.456 READ: bw=33.7MiB/s (35.3MB/s), 6138KiB/s-10.8MiB/s (6285kB/s-11.3MB/s), io=33.7MiB (35.3MB), run=1001-1001msec 00:16:03.456 WRITE: bw=37.8MiB/s (39.6MB/s), 7437KiB/s-12.0MiB/s (7615kB/s-12.6MB/s), io=37.8MiB (39.7MB), run=1001-1001msec 00:16:03.456 00:16:03.456 Disk stats (read/write): 00:16:03.456 nvme0n1: ios=2584/2560, merge=0/0, ticks=492/364, in_queue=856, util=89.28% 00:16:03.456 nvme0n2: ios=1585/1819, 
merge=0/0, ticks=519/339, in_queue=858, util=89.81% 00:16:03.456 nvme0n3: ios=1399/1536, merge=0/0, ticks=496/356, in_queue=852, util=89.57% 00:16:03.456 nvme0n4: ios=2048/2541, merge=0/0, ticks=387/427, in_queue=814, util=89.83% 00:16:03.456 17:19:33 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:03.456 [global] 00:16:03.456 thread=1 00:16:03.456 invalidate=1 00:16:03.456 rw=write 00:16:03.456 time_based=1 00:16:03.456 runtime=1 00:16:03.456 ioengine=libaio 00:16:03.456 direct=1 00:16:03.456 bs=4096 00:16:03.456 iodepth=128 00:16:03.456 norandommap=0 00:16:03.456 numjobs=1 00:16:03.456 00:16:03.456 verify_dump=1 00:16:03.457 verify_backlog=512 00:16:03.457 verify_state_save=0 00:16:03.457 do_verify=1 00:16:03.457 verify=crc32c-intel 00:16:03.457 [job0] 00:16:03.457 filename=/dev/nvme0n1 00:16:03.457 [job1] 00:16:03.457 filename=/dev/nvme0n2 00:16:03.457 [job2] 00:16:03.457 filename=/dev/nvme0n3 00:16:03.457 [job3] 00:16:03.457 filename=/dev/nvme0n4 00:16:03.457 Could not set queue depth (nvme0n1) 00:16:03.457 Could not set queue depth (nvme0n2) 00:16:03.457 Could not set queue depth (nvme0n3) 00:16:03.457 Could not set queue depth (nvme0n4) 00:16:03.457 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.457 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.457 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.457 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.457 fio-3.35 00:16:03.457 Starting 4 threads 00:16:04.833 00:16:04.833 job0: (groupid=0, jobs=1): err= 0: pid=80421: Thu Apr 25 17:19:34 2024 00:16:04.833 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:16:04.833 slat (usec): min=6, max=20914, avg=206.78, stdev=1328.92 00:16:04.833 clat (usec): min=12921, max=81662, avg=25730.34, stdev=14758.40 00:16:04.833 lat (usec): min=12942, max=81698, avg=25937.12, stdev=14910.80 00:16:04.833 clat percentiles (usec): 00:16:04.833 | 1.00th=[13435], 5.00th=[14877], 10.00th=[15139], 20.00th=[15401], 00:16:04.833 | 30.00th=[15533], 40.00th=[15664], 50.00th=[16188], 60.00th=[19006], 00:16:04.833 | 70.00th=[30278], 80.00th=[39584], 90.00th=[50594], 95.00th=[59507], 00:16:04.833 | 99.00th=[64750], 99.50th=[64750], 99.90th=[71828], 99.95th=[80217], 00:16:04.833 | 99.99th=[81265] 00:16:04.833 write: IOPS=2359, BW=9437KiB/s (9663kB/s)(9512KiB/1008msec); 0 zone resets 00:16:04.833 slat (usec): min=15, max=15803, avg=234.00, stdev=1017.24 00:16:04.833 clat (usec): min=6848, max=66243, avg=30833.50, stdev=13921.43 00:16:04.833 lat (usec): min=7781, max=66302, avg=31067.50, stdev=14019.02 00:16:04.833 clat percentiles (usec): 00:16:04.833 | 1.00th=[10421], 5.00th=[17957], 10.00th=[19268], 20.00th=[20317], 00:16:04.833 | 30.00th=[20579], 40.00th=[21103], 50.00th=[22152], 60.00th=[30016], 00:16:04.833 | 70.00th=[36439], 80.00th=[47449], 90.00th=[51643], 95.00th=[58459], 00:16:04.833 | 99.00th=[62653], 99.50th=[63701], 99.90th=[64750], 99.95th=[65274], 00:16:04.833 | 99.99th=[66323] 00:16:04.833 bw ( KiB/s): min= 8192, max= 9816, per=18.46%, avg=9004.00, stdev=1148.34, samples=2 00:16:04.833 iops : min= 2048, max= 2454, avg=2251.00, stdev=287.09, samples=2 00:16:04.833 lat (msec) : 10=0.38%, 20=36.74%, 50=48.67%, 100=14.21% 00:16:04.833 cpu : usr=2.38%, sys=7.65%, 
ctx=313, majf=0, minf=15 00:16:04.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:04.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.833 issued rwts: total=2048,2378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.833 job1: (groupid=0, jobs=1): err= 0: pid=80422: Thu Apr 25 17:19:34 2024 00:16:04.833 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:16:04.833 slat (usec): min=4, max=9520, avg=211.02, stdev=853.64 00:16:04.833 clat (usec): min=19455, max=43004, avg=25997.59, stdev=3477.65 00:16:04.833 lat (usec): min=19482, max=43025, avg=26208.61, stdev=3566.41 00:16:04.833 clat percentiles (usec): 00:16:04.833 | 1.00th=[20317], 5.00th=[21890], 10.00th=[22938], 20.00th=[23462], 00:16:04.833 | 30.00th=[23725], 40.00th=[24249], 50.00th=[25297], 60.00th=[26084], 00:16:04.833 | 70.00th=[26870], 80.00th=[28181], 90.00th=[30278], 95.00th=[32113], 00:16:04.833 | 99.00th=[39060], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:16:04.833 | 99.99th=[43254] 00:16:04.833 write: IOPS=2156, BW=8625KiB/s (8832kB/s)(8720KiB/1011msec); 0 zone resets 00:16:04.833 slat (usec): min=4, max=9527, avg=252.57, stdev=900.61 00:16:04.833 clat (usec): min=10070, max=60510, avg=33919.54, stdev=11626.37 00:16:04.833 lat (usec): min=11224, max=60530, avg=34172.11, stdev=11725.08 00:16:04.833 clat percentiles (usec): 00:16:04.833 | 1.00th=[14877], 5.00th=[20055], 10.00th=[21103], 20.00th=[22676], 00:16:04.833 | 30.00th=[24511], 40.00th=[25560], 50.00th=[30016], 60.00th=[39584], 00:16:04.833 | 70.00th=[42730], 80.00th=[46400], 90.00th=[50594], 95.00th=[53216], 00:16:04.833 | 99.00th=[56361], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:16:04.833 | 99.99th=[60556] 00:16:04.833 bw ( KiB/s): min= 6696, max= 9740, per=16.85%, avg=8218.00, stdev=2152.43, samples=2 00:16:04.833 iops : min= 1674, max= 2435, avg=2054.50, stdev=538.11, samples=2 00:16:04.833 lat (msec) : 20=3.03%, 50=91.20%, 100=5.77% 00:16:04.833 cpu : usr=2.48%, sys=5.94%, ctx=658, majf=0, minf=11 00:16:04.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:04.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.833 issued rwts: total=2048,2180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.833 job2: (groupid=0, jobs=1): err= 0: pid=80424: Thu Apr 25 17:19:34 2024 00:16:04.833 read: IOPS=5512, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1001msec) 00:16:04.833 slat (usec): min=9, max=3376, avg=86.59, stdev=393.90 00:16:04.833 clat (usec): min=594, max=13892, avg=11602.01, stdev=1082.09 00:16:04.833 lat (usec): min=2732, max=13904, avg=11688.59, stdev=1017.67 00:16:04.833 clat percentiles (usec): 00:16:04.833 | 1.00th=[ 6128], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[11469], 00:16:04.833 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:16:04.833 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:16:04.833 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13829], 99.95th=[13829], 00:16:04.833 | 99.99th=[13829] 00:16:04.833 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:16:04.833 slat (usec): min=11, max=2808, avg=85.27, stdev=361.36 
00:16:04.833 clat (usec): min=9055, max=13258, avg=11090.36, stdev=1142.55 00:16:04.833 lat (usec): min=9075, max=13481, avg=11175.62, stdev=1137.24 00:16:04.833 clat percentiles (usec): 00:16:04.833 | 1.00th=[ 9241], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[ 9896], 00:16:04.833 | 30.00th=[10028], 40.00th=[10290], 50.00th=[11469], 60.00th=[11731], 00:16:04.833 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12387], 95.00th=[12649], 00:16:04.833 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13173], 99.95th=[13304], 00:16:04.833 | 99.99th=[13304] 00:16:04.833 bw ( KiB/s): min=23272, max=23272, per=47.72%, avg=23272.00, stdev= 0.00, samples=1 00:16:04.833 iops : min= 5818, max= 5818, avg=5818.00, stdev= 0.00, samples=1 00:16:04.833 lat (usec) : 750=0.01% 00:16:04.833 lat (msec) : 4=0.29%, 10=16.05%, 20=83.65% 00:16:04.833 cpu : usr=4.80%, sys=14.90%, ctx=608, majf=0, minf=13 00:16:04.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:04.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.833 issued rwts: total=5518,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.833 job3: (groupid=0, jobs=1): err= 0: pid=80425: Thu Apr 25 17:19:34 2024 00:16:04.833 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:16:04.833 slat (usec): min=4, max=7862, avg=206.57, stdev=813.63 00:16:04.833 clat (usec): min=19713, max=44448, avg=26420.18, stdev=3735.57 00:16:04.833 lat (usec): min=19729, max=46289, avg=26626.75, stdev=3802.32 00:16:04.833 clat percentiles (usec): 00:16:04.833 | 1.00th=[20841], 5.00th=[22676], 10.00th=[23200], 20.00th=[23725], 00:16:04.833 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25560], 60.00th=[26346], 00:16:04.833 | 70.00th=[27132], 80.00th=[28705], 90.00th=[30802], 95.00th=[34866], 00:16:04.833 | 99.00th=[38536], 99.50th=[41681], 99.90th=[43254], 99.95th=[44303], 00:16:04.833 | 99.99th=[44303] 00:16:04.833 write: IOPS=2115, BW=8463KiB/s (8666kB/s)(8548KiB/1010msec); 0 zone resets 00:16:04.833 slat (usec): min=6, max=8664, avg=262.19, stdev=890.32 00:16:04.833 clat (usec): min=9034, max=58569, avg=34136.34, stdev=11359.12 00:16:04.833 lat (usec): min=13790, max=58718, avg=34398.53, stdev=11455.05 00:16:04.833 clat percentiles (usec): 00:16:04.834 | 1.00th=[18220], 5.00th=[20841], 10.00th=[21365], 20.00th=[22676], 00:16:04.834 | 30.00th=[24249], 40.00th=[26870], 50.00th=[30540], 60.00th=[40109], 00:16:04.834 | 70.00th=[42730], 80.00th=[45876], 90.00th=[50594], 95.00th=[52167], 00:16:04.834 | 99.00th=[55313], 99.50th=[56361], 99.90th=[56886], 99.95th=[58459], 00:16:04.834 | 99.99th=[58459] 00:16:04.834 bw ( KiB/s): min= 6816, max= 9568, per=16.80%, avg=8192.00, stdev=1945.96, samples=2 00:16:04.834 iops : min= 1704, max= 2392, avg=2048.00, stdev=486.49, samples=2 00:16:04.834 lat (msec) : 10=0.02%, 20=0.93%, 50=93.24%, 100=5.81% 00:16:04.834 cpu : usr=1.88%, sys=6.44%, ctx=681, majf=0, minf=11 00:16:04.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:04.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.834 issued rwts: total=2048,2137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.834 00:16:04.834 Run status group 0 (all jobs): 
00:16:04.834 READ: bw=45.1MiB/s (47.2MB/s), 8103KiB/s-21.5MiB/s (8297kB/s-22.6MB/s), io=45.6MiB (47.8MB), run=1001-1011msec 00:16:04.834 WRITE: bw=47.6MiB/s (49.9MB/s), 8463KiB/s-22.0MiB/s (8666kB/s-23.0MB/s), io=48.2MiB (50.5MB), run=1001-1011msec 00:16:04.834 00:16:04.834 Disk stats (read/write): 00:16:04.834 nvme0n1: ios=1795/2048, merge=0/0, ticks=14384/19885, in_queue=34269, util=87.88% 00:16:04.834 nvme0n2: ios=1575/2010, merge=0/0, ticks=12604/20849, in_queue=33453, util=87.45% 00:16:04.834 nvme0n3: ios=4608/4902, merge=0/0, ticks=12314/11608, in_queue=23922, util=89.09% 00:16:04.834 nvme0n4: ios=1536/1981, merge=0/0, ticks=12421/20810, in_queue=33231, util=89.33% 00:16:04.834 17:19:34 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:04.834 [global] 00:16:04.834 thread=1 00:16:04.834 invalidate=1 00:16:04.834 rw=randwrite 00:16:04.834 time_based=1 00:16:04.834 runtime=1 00:16:04.834 ioengine=libaio 00:16:04.834 direct=1 00:16:04.834 bs=4096 00:16:04.834 iodepth=128 00:16:04.834 norandommap=0 00:16:04.834 numjobs=1 00:16:04.834 00:16:04.834 verify_dump=1 00:16:04.834 verify_backlog=512 00:16:04.834 verify_state_save=0 00:16:04.834 do_verify=1 00:16:04.834 verify=crc32c-intel 00:16:04.834 [job0] 00:16:04.834 filename=/dev/nvme0n1 00:16:04.834 [job1] 00:16:04.834 filename=/dev/nvme0n2 00:16:04.834 [job2] 00:16:04.834 filename=/dev/nvme0n3 00:16:04.834 [job3] 00:16:04.834 filename=/dev/nvme0n4 00:16:04.834 Could not set queue depth (nvme0n1) 00:16:04.834 Could not set queue depth (nvme0n2) 00:16:04.834 Could not set queue depth (nvme0n3) 00:16:04.834 Could not set queue depth (nvme0n4) 00:16:04.834 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.834 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.834 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.834 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.834 fio-3.35 00:16:04.834 Starting 4 threads 00:16:06.212 00:16:06.212 job0: (groupid=0, jobs=1): err= 0: pid=80483: Thu Apr 25 17:19:35 2024 00:16:06.212 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:16:06.212 slat (usec): min=4, max=10665, avg=101.92, stdev=630.83 00:16:06.212 clat (usec): min=5235, max=22920, avg=12864.43, stdev=3210.10 00:16:06.212 lat (usec): min=5247, max=22932, avg=12966.35, stdev=3240.06 00:16:06.212 clat percentiles (usec): 00:16:06.212 | 1.00th=[ 5866], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10552], 00:16:06.212 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11731], 60.00th=[12387], 00:16:06.212 | 70.00th=[13698], 80.00th=[14746], 90.00th=[17957], 95.00th=[20055], 00:16:06.212 | 99.00th=[22152], 99.50th=[22152], 99.90th=[22938], 99.95th=[22938], 00:16:06.212 | 99.99th=[22938] 00:16:06.212 write: IOPS=5315, BW=20.8MiB/s (21.8MB/s)(21.0MiB/1011msec); 0 zone resets 00:16:06.212 slat (usec): min=4, max=9527, avg=81.15, stdev=309.67 00:16:06.212 clat (usec): min=4873, max=22848, avg=11575.94, stdev=2628.87 00:16:06.212 lat (usec): min=4895, max=22857, avg=11657.09, stdev=2645.34 00:16:06.212 clat percentiles (usec): 00:16:06.212 | 1.00th=[ 5145], 5.00th=[ 5866], 10.00th=[ 6849], 20.00th=[10159], 00:16:06.212 | 30.00th=[11600], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:16:06.212 | 
70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13435], 00:16:06.212 | 99.00th=[19530], 99.50th=[21627], 99.90th=[22414], 99.95th=[22938], 00:16:06.212 | 99.99th=[22938] 00:16:06.212 bw ( KiB/s): min=20936, max=21082, per=35.16%, avg=21009.00, stdev=103.24, samples=2 00:16:06.212 iops : min= 5234, max= 5270, avg=5252.00, stdev=25.46, samples=2 00:16:06.212 lat (msec) : 10=17.77%, 20=79.17%, 50=3.06% 00:16:06.212 cpu : usr=4.75%, sys=13.07%, ctx=823, majf=0, minf=11 00:16:06.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:06.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.212 issued rwts: total=5120,5374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.212 job1: (groupid=0, jobs=1): err= 0: pid=80484: Thu Apr 25 17:19:35 2024 00:16:06.212 read: IOPS=2429, BW=9718KiB/s (9951kB/s)(9796KiB/1008msec) 00:16:06.212 slat (usec): min=3, max=10475, avg=212.95, stdev=1034.69 00:16:06.212 clat (usec): min=5775, max=40631, avg=25919.83, stdev=4113.10 00:16:06.212 lat (usec): min=11065, max=40652, avg=26132.78, stdev=4199.45 00:16:06.212 clat percentiles (usec): 00:16:06.212 | 1.00th=[11600], 5.00th=[18482], 10.00th=[20055], 20.00th=[24249], 00:16:06.212 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:16:06.212 | 70.00th=[26870], 80.00th=[29492], 90.00th=[31065], 95.00th=[32375], 00:16:06.212 | 99.00th=[35390], 99.50th=[36439], 99.90th=[38536], 99.95th=[39060], 00:16:06.212 | 99.99th=[40633] 00:16:06.212 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:16:06.212 slat (usec): min=5, max=12051, avg=179.63, stdev=925.46 00:16:06.212 clat (usec): min=9744, max=37281, avg=25069.47, stdev=4067.14 00:16:06.212 lat (usec): min=9771, max=37309, avg=25249.10, stdev=4149.98 00:16:06.212 clat percentiles (usec): 00:16:06.212 | 1.00th=[11469], 5.00th=[17695], 10.00th=[19268], 20.00th=[22414], 00:16:06.212 | 30.00th=[23725], 40.00th=[25035], 50.00th=[25560], 60.00th=[26608], 00:16:06.212 | 70.00th=[27395], 80.00th=[27919], 90.00th=[28967], 95.00th=[30802], 00:16:06.212 | 99.00th=[32637], 99.50th=[33424], 99.90th=[36439], 99.95th=[36963], 00:16:06.212 | 99.99th=[37487] 00:16:06.212 bw ( KiB/s): min= 8936, max=11544, per=17.14%, avg=10240.00, stdev=1844.13, samples=2 00:16:06.212 iops : min= 2234, max= 2886, avg=2560.00, stdev=461.03, samples=2 00:16:06.212 lat (msec) : 10=0.12%, 20=9.94%, 50=89.94% 00:16:06.212 cpu : usr=3.08%, sys=6.06%, ctx=650, majf=0, minf=7 00:16:06.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:06.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.212 issued rwts: total=2449,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.212 job2: (groupid=0, jobs=1): err= 0: pid=80485: Thu Apr 25 17:19:35 2024 00:16:06.212 read: IOPS=4372, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1007msec) 00:16:06.212 slat (usec): min=4, max=15342, avg=122.26, stdev=791.13 00:16:06.212 clat (usec): min=3925, max=34833, avg=15249.77, stdev=4282.09 00:16:06.212 lat (usec): min=6291, max=34859, avg=15372.04, stdev=4318.17 00:16:06.212 clat percentiles (usec): 00:16:06.212 | 1.00th=[ 6390], 5.00th=[10814], 10.00th=[11600], 
20.00th=[12125], 00:16:06.212 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13829], 60.00th=[14484], 00:16:06.212 | 70.00th=[16319], 80.00th=[17695], 90.00th=[22152], 95.00th=[24249], 00:16:06.212 | 99.00th=[28181], 99.50th=[31327], 99.90th=[34866], 99.95th=[34866], 00:16:06.212 | 99.99th=[34866] 00:16:06.212 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:16:06.212 slat (usec): min=4, max=10232, avg=92.46, stdev=369.15 00:16:06.212 clat (usec): min=3804, max=34770, avg=13099.02, stdev=2878.22 00:16:06.212 lat (usec): min=3831, max=34781, avg=13191.48, stdev=2903.29 00:16:06.212 clat percentiles (usec): 00:16:06.212 | 1.00th=[ 5669], 5.00th=[ 6718], 10.00th=[ 7963], 20.00th=[11207], 00:16:06.212 | 30.00th=[13042], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:16:06.212 | 70.00th=[14615], 80.00th=[14746], 90.00th=[14877], 95.00th=[15139], 00:16:06.212 | 99.00th=[19268], 99.50th=[20317], 99.90th=[26084], 99.95th=[26346], 00:16:06.212 | 99.99th=[34866] 00:16:06.212 bw ( KiB/s): min=17040, max=19863, per=30.88%, avg=18451.50, stdev=1996.16, samples=2 00:16:06.212 iops : min= 4260, max= 4965, avg=4612.50, stdev=498.51, samples=2 00:16:06.212 lat (msec) : 4=0.07%, 10=8.82%, 20=83.63%, 50=7.48% 00:16:06.212 cpu : usr=4.97%, sys=11.13%, ctx=681, majf=0, minf=13 00:16:06.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:06.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.213 issued rwts: total=4403,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.213 job3: (groupid=0, jobs=1): err= 0: pid=80487: Thu Apr 25 17:19:35 2024 00:16:06.213 read: IOPS=2354, BW=9418KiB/s (9644kB/s)(9456KiB/1004msec) 00:16:06.213 slat (usec): min=4, max=10960, avg=200.04, stdev=960.84 00:16:06.213 clat (usec): min=2242, max=35157, avg=25367.84, stdev=3616.62 00:16:06.213 lat (usec): min=5956, max=38607, avg=25567.87, stdev=3682.65 00:16:06.213 clat percentiles (usec): 00:16:06.213 | 1.00th=[12387], 5.00th=[20841], 10.00th=[22414], 20.00th=[24249], 00:16:06.213 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25560], 00:16:06.213 | 70.00th=[26084], 80.00th=[27657], 90.00th=[29754], 95.00th=[31065], 00:16:06.213 | 99.00th=[33162], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:16:06.213 | 99.99th=[35390] 00:16:06.213 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:16:06.213 slat (usec): min=5, max=12641, avg=198.59, stdev=1018.02 00:16:06.213 clat (usec): min=15103, max=38182, avg=25745.00, stdev=2956.81 00:16:06.213 lat (usec): min=15144, max=38207, avg=25943.59, stdev=3051.43 00:16:06.213 clat percentiles (usec): 00:16:06.213 | 1.00th=[19530], 5.00th=[20055], 10.00th=[21890], 20.00th=[23725], 00:16:06.213 | 30.00th=[24249], 40.00th=[25035], 50.00th=[25560], 60.00th=[26608], 00:16:06.213 | 70.00th=[27395], 80.00th=[27919], 90.00th=[28705], 95.00th=[29754], 00:16:06.213 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:16:06.213 | 99.99th=[38011] 00:16:06.213 bw ( KiB/s): min= 9800, max=10680, per=17.14%, avg=10240.00, stdev=622.25, samples=2 00:16:06.213 iops : min= 2450, max= 2670, avg=2560.00, stdev=155.56, samples=2 00:16:06.213 lat (msec) : 4=0.02%, 10=0.41%, 20=3.74%, 50=95.84% 00:16:06.213 cpu : usr=2.59%, sys=6.98%, ctx=626, majf=0, minf=13 00:16:06.213 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:06.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.213 issued rwts: total=2364,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.213 00:16:06.213 Run status group 0 (all jobs): 00:16:06.213 READ: bw=55.4MiB/s (58.1MB/s), 9418KiB/s-19.8MiB/s (9644kB/s-20.7MB/s), io=56.0MiB (58.7MB), run=1004-1011msec 00:16:06.213 WRITE: bw=58.3MiB/s (61.2MB/s), 9.92MiB/s-20.8MiB/s (10.4MB/s-21.8MB/s), io=59.0MiB (61.9MB), run=1004-1011msec 00:16:06.213 00:16:06.213 Disk stats (read/write): 00:16:06.213 nvme0n1: ios=4325/4608, merge=0/0, ticks=51760/51476, in_queue=103236, util=88.58% 00:16:06.213 nvme0n2: ios=2091/2300, merge=0/0, ticks=25879/25451, in_queue=51330, util=87.87% 00:16:06.213 nvme0n3: ios=3707/4096, merge=0/0, ticks=52323/51553, in_queue=103876, util=88.97% 00:16:06.213 nvme0n4: ios=2054/2155, merge=0/0, ticks=24779/24936, in_queue=49715, util=88.49% 00:16:06.213 17:19:35 -- target/fio.sh@55 -- # sync 00:16:06.213 17:19:35 -- target/fio.sh@59 -- # fio_pid=80500 00:16:06.213 17:19:35 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:06.213 17:19:35 -- target/fio.sh@61 -- # sleep 3 00:16:06.213 [global] 00:16:06.213 thread=1 00:16:06.213 invalidate=1 00:16:06.213 rw=read 00:16:06.213 time_based=1 00:16:06.213 runtime=10 00:16:06.213 ioengine=libaio 00:16:06.213 direct=1 00:16:06.213 bs=4096 00:16:06.213 iodepth=1 00:16:06.213 norandommap=1 00:16:06.213 numjobs=1 00:16:06.213 00:16:06.213 [job0] 00:16:06.213 filename=/dev/nvme0n1 00:16:06.213 [job1] 00:16:06.213 filename=/dev/nvme0n2 00:16:06.213 [job2] 00:16:06.213 filename=/dev/nvme0n3 00:16:06.213 [job3] 00:16:06.213 filename=/dev/nvme0n4 00:16:06.213 Could not set queue depth (nvme0n1) 00:16:06.213 Could not set queue depth (nvme0n2) 00:16:06.213 Could not set queue depth (nvme0n3) 00:16:06.213 Could not set queue depth (nvme0n4) 00:16:06.213 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.213 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.213 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.213 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.213 fio-3.35 00:16:06.213 Starting 4 threads 00:16:09.497 17:19:38 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:09.497 fio: pid=80543, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.497 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=45244416, buflen=4096 00:16:09.497 17:19:39 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:09.497 fio: pid=80542, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.497 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=41123840, buflen=4096 00:16:09.497 17:19:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.497 17:19:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:09.755 fio: pid=80540, 
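For reference, the [global]/[job] options printed above for this 10-second read pass map onto a plain fio command along the following lines. This is only a sketch for rerunning the same workload outside the fio-wrapper harness, not the wrapper's exact invocation; device paths are taken from the log and global options are given before the first --name so they apply to all four jobs:

    fio --rw=read --bs=4096 --iodepth=1 --ioengine=libaio --direct=1 \
        --time_based --runtime=10 --norandommap --numjobs=1 --thread --invalidate=1 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4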
err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.755 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=53248000, buflen=4096 00:16:09.755 17:19:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.755 17:19:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:10.014 fio: pid=80541, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:10.014 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=50212864, buflen=4096 00:16:10.014 17:19:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.014 17:19:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:10.014 00:16:10.014 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80540: Thu Apr 25 17:19:39 2024 00:16:10.014 read: IOPS=3828, BW=15.0MiB/s (15.7MB/s)(50.8MiB/3396msec) 00:16:10.014 slat (usec): min=7, max=9870, avg=16.78, stdev=145.17 00:16:10.014 clat (usec): min=132, max=7716, avg=242.89, stdev=96.15 00:16:10.014 lat (usec): min=150, max=10201, avg=259.68, stdev=174.01 00:16:10.014 clat percentiles (usec): 00:16:10.014 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 174], 00:16:10.014 | 30.00th=[ 186], 40.00th=[ 210], 50.00th=[ 265], 60.00th=[ 273], 00:16:10.014 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 326], 00:16:10.014 | 99.00th=[ 363], 99.50th=[ 396], 99.90th=[ 498], 99.95th=[ 742], 00:16:10.014 | 99.99th=[ 3195] 00:16:10.014 bw ( KiB/s): min=13120, max=20376, per=29.64%, avg=15122.67, stdev=3107.30, samples=6 00:16:10.014 iops : min= 3280, max= 5094, avg=3780.67, stdev=776.82, samples=6 00:16:10.014 lat (usec) : 250=45.40%, 500=54.50%, 750=0.05% 00:16:10.014 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:16:10.014 cpu : usr=1.35%, sys=4.48%, ctx=13009, majf=0, minf=1 00:16:10.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 issued rwts: total=13001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.014 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80541: Thu Apr 25 17:19:39 2024 00:16:10.014 read: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(47.9MiB/3634msec) 00:16:10.014 slat (usec): min=10, max=12523, avg=22.80, stdev=214.18 00:16:10.014 clat (usec): min=121, max=3228, avg=271.61, stdev=75.36 00:16:10.014 lat (usec): min=142, max=12764, avg=294.41, stdev=226.46 00:16:10.014 clat percentiles (usec): 00:16:10.014 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 167], 20.00th=[ 253], 00:16:10.014 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 289], 00:16:10.014 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 330], 00:16:10.014 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 578], 99.95th=[ 1778], 00:16:10.014 | 99.99th=[ 2999] 00:16:10.014 bw ( KiB/s): min=12064, max=16736, per=26.23%, avg=13381.71, stdev=1528.37, samples=7 00:16:10.014 iops : min= 3016, max= 4184, avg=3345.43, stdev=382.09, samples=7 00:16:10.014 lat (usec) : 250=18.84%, 500=80.96%, 750=0.10%, 1000=0.02% 00:16:10.014 lat (msec) : 2=0.04%, 4=0.03% 00:16:10.014 cpu : 
usr=1.60%, sys=4.87%, ctx=12283, majf=0, minf=1 00:16:10.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 issued rwts: total=12260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.014 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80542: Thu Apr 25 17:19:39 2024 00:16:10.014 read: IOPS=3174, BW=12.4MiB/s (13.0MB/s)(39.2MiB/3163msec) 00:16:10.014 slat (usec): min=12, max=9777, avg=17.10, stdev=122.86 00:16:10.014 clat (usec): min=142, max=7218, avg=296.39, stdev=113.67 00:16:10.014 lat (usec): min=156, max=10050, avg=313.49, stdev=167.18 00:16:10.014 clat percentiles (usec): 00:16:10.014 | 1.00th=[ 182], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 00:16:10.014 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:16:10.014 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 338], 00:16:10.014 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 930], 99.95th=[ 1893], 00:16:10.014 | 99.99th=[ 7177] 00:16:10.014 bw ( KiB/s): min=12016, max=13216, per=24.99%, avg=12748.00, stdev=565.50, samples=6 00:16:10.014 iops : min= 3004, max= 3304, avg=3187.00, stdev=141.37, samples=6 00:16:10.014 lat (usec) : 250=1.86%, 500=97.92%, 750=0.08%, 1000=0.03% 00:16:10.014 lat (msec) : 2=0.06%, 4=0.02%, 10=0.02% 00:16:10.014 cpu : usr=0.73%, sys=4.11%, ctx=10045, majf=0, minf=1 00:16:10.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 issued rwts: total=10041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.014 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=80543: Thu Apr 25 17:19:39 2024 00:16:10.014 read: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(43.1MiB/2938msec) 00:16:10.014 slat (nsec): min=8450, max=65703, avg=14684.68, stdev=4554.12 00:16:10.014 clat (usec): min=148, max=1794, avg=249.72, stdev=62.90 00:16:10.014 lat (usec): min=163, max=1810, avg=264.40, stdev=61.61 00:16:10.014 clat percentiles (usec): 00:16:10.014 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 176], 00:16:10.014 | 30.00th=[ 192], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 281], 00:16:10.014 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:16:10.014 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 453], 99.95th=[ 523], 00:16:10.014 | 99.99th=[ 1532] 00:16:10.014 bw ( KiB/s): min=13128, max=20224, per=30.24%, avg=15428.80, stdev=3256.33, samples=5 00:16:10.014 iops : min= 3282, max= 5056, avg=3857.20, stdev=814.08, samples=5 00:16:10.014 lat (usec) : 250=37.11%, 500=62.82%, 750=0.03%, 1000=0.01% 00:16:10.014 lat (msec) : 2=0.02% 00:16:10.014 cpu : usr=1.06%, sys=4.73%, ctx=11047, majf=0, minf=1 00:16:10.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.014 issued rwts: total=11047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.014 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:16:10.014 00:16:10.014 Run status group 0 (all jobs): 00:16:10.014 READ: bw=49.8MiB/s (52.2MB/s), 12.4MiB/s-15.0MiB/s (13.0MB/s-15.7MB/s), io=181MiB (190MB), run=2938-3634msec 00:16:10.014 00:16:10.014 Disk stats (read/write): 00:16:10.014 nvme0n1: ios=12903/0, merge=0/0, ticks=3118/0, in_queue=3118, util=95.51% 00:16:10.014 nvme0n2: ios=12186/0, merge=0/0, ticks=3385/0, in_queue=3385, util=95.40% 00:16:10.014 nvme0n3: ios=9909/0, merge=0/0, ticks=2961/0, in_queue=2961, util=96.09% 00:16:10.014 nvme0n4: ios=10835/0, merge=0/0, ticks=2707/0, in_queue=2707, util=96.80% 00:16:10.272 17:19:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.272 17:19:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:10.531 17:19:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.531 17:19:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:10.789 17:19:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.789 17:19:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:11.048 17:19:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:11.048 17:19:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:11.306 17:19:41 -- target/fio.sh@69 -- # fio_status=0 00:16:11.306 17:19:41 -- target/fio.sh@70 -- # wait 80500 00:16:11.306 17:19:41 -- target/fio.sh@70 -- # fio_status=4 00:16:11.306 17:19:41 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.306 17:19:41 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.306 17:19:41 -- common/autotest_common.sh@1205 -- # local i=0 00:16:11.306 17:19:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:11.306 17:19:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.306 17:19:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:11.306 17:19:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.306 17:19:41 -- common/autotest_common.sh@1217 -- # return 0 00:16:11.306 17:19:41 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:11.306 nvmf hotplug test: fio failed as expected 00:16:11.306 17:19:41 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:11.306 17:19:41 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.306 17:19:41 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:11.565 17:19:41 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:11.565 17:19:41 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:11.565 17:19:41 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:11.565 17:19:41 -- target/fio.sh@91 -- # nvmftestfini 00:16:11.565 17:19:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:11.565 17:19:41 -- nvmf/common.sh@117 -- # sync 00:16:11.565 17:19:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:11.565 17:19:41 -- nvmf/common.sh@120 -- # set +e 00:16:11.565 17:19:41 -- nvmf/common.sh@121 -- # for i in {1..20} 
00:16:11.565 17:19:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:11.565 rmmod nvme_tcp 00:16:11.565 rmmod nvme_fabrics 00:16:11.565 rmmod nvme_keyring 00:16:11.565 17:19:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.565 17:19:41 -- nvmf/common.sh@124 -- # set -e 00:16:11.565 17:19:41 -- nvmf/common.sh@125 -- # return 0 00:16:11.565 17:19:41 -- nvmf/common.sh@478 -- # '[' -n 80015 ']' 00:16:11.565 17:19:41 -- nvmf/common.sh@479 -- # killprocess 80015 00:16:11.565 17:19:41 -- common/autotest_common.sh@936 -- # '[' -z 80015 ']' 00:16:11.565 17:19:41 -- common/autotest_common.sh@940 -- # kill -0 80015 00:16:11.565 17:19:41 -- common/autotest_common.sh@941 -- # uname 00:16:11.565 17:19:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.565 17:19:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80015 00:16:11.565 17:19:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.565 17:19:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.565 killing process with pid 80015 00:16:11.565 17:19:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80015' 00:16:11.565 17:19:41 -- common/autotest_common.sh@955 -- # kill 80015 00:16:11.565 17:19:41 -- common/autotest_common.sh@960 -- # wait 80015 00:16:11.823 17:19:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:11.823 17:19:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:11.823 17:19:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:11.823 17:19:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.823 17:19:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.823 17:19:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.823 17:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.823 17:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.823 17:19:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:11.823 00:16:11.823 real 0m18.582s 00:16:11.824 user 1m9.824s 00:16:11.824 sys 0m9.052s 00:16:11.824 17:19:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.824 17:19:41 -- common/autotest_common.sh@10 -- # set +x 00:16:11.824 ************************************ 00:16:11.824 END TEST nvmf_fio_target 00:16:11.824 ************************************ 00:16:11.824 17:19:41 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:11.824 17:19:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:11.824 17:19:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.824 17:19:41 -- common/autotest_common.sh@10 -- # set +x 00:16:11.824 ************************************ 00:16:11.824 START TEST nvmf_bdevio 00:16:11.824 ************************************ 00:16:11.824 17:19:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:11.824 * Looking for test storage... 
00:16:11.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:11.824 17:19:41 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.824 17:19:41 -- nvmf/common.sh@7 -- # uname -s 00:16:11.824 17:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.824 17:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.824 17:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.824 17:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.824 17:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.824 17:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.824 17:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.824 17:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.824 17:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.824 17:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.083 17:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:16:12.083 17:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:16:12.083 17:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.083 17:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.083 17:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.083 17:19:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.083 17:19:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.083 17:19:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.083 17:19:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.083 17:19:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.083 17:19:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.083 17:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.083 17:19:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.083 17:19:41 -- paths/export.sh@5 -- # export PATH 00:16:12.083 17:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.083 17:19:41 -- nvmf/common.sh@47 -- # : 0 00:16:12.083 17:19:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.083 17:19:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.083 17:19:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.083 17:19:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.083 17:19:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.083 17:19:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.083 17:19:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.083 17:19:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.083 17:19:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.083 17:19:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.083 17:19:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:12.083 17:19:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:12.083 17:19:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.083 17:19:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:12.083 17:19:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:12.083 17:19:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:12.083 17:19:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.083 17:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.083 17:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.083 17:19:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:12.084 17:19:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:12.084 17:19:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:12.084 17:19:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:12.084 17:19:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:12.084 17:19:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:12.084 17:19:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.084 17:19:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.084 17:19:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.084 17:19:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:12.084 17:19:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.084 17:19:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.084 17:19:41 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.084 17:19:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.084 17:19:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.084 17:19:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.084 17:19:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.084 17:19:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.084 17:19:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:12.084 17:19:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:12.084 Cannot find device "nvmf_tgt_br" 00:16:12.084 17:19:41 -- nvmf/common.sh@155 -- # true 00:16:12.084 17:19:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.084 Cannot find device "nvmf_tgt_br2" 00:16:12.084 17:19:41 -- nvmf/common.sh@156 -- # true 00:16:12.084 17:19:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:12.084 17:19:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:12.084 Cannot find device "nvmf_tgt_br" 00:16:12.084 17:19:41 -- nvmf/common.sh@158 -- # true 00:16:12.084 17:19:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:12.084 Cannot find device "nvmf_tgt_br2" 00:16:12.084 17:19:41 -- nvmf/common.sh@159 -- # true 00:16:12.084 17:19:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:12.084 17:19:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:12.084 17:19:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.084 17:19:41 -- nvmf/common.sh@162 -- # true 00:16:12.084 17:19:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.084 17:19:41 -- nvmf/common.sh@163 -- # true 00:16:12.084 17:19:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.084 17:19:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.084 17:19:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.084 17:19:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.084 17:19:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.084 17:19:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.084 17:19:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.084 17:19:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:12.084 17:19:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:12.084 17:19:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:12.084 17:19:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:12.084 17:19:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:12.084 17:19:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:12.084 17:19:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.084 17:19:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.084 17:19:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:12.343 17:19:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:12.343 17:19:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:12.343 17:19:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.343 17:19:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.343 17:19:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.343 17:19:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.343 17:19:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.343 17:19:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:12.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:12.343 00:16:12.343 --- 10.0.0.2 ping statistics --- 00:16:12.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.343 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:12.343 17:19:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:12.343 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.343 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:12.343 00:16:12.343 --- 10.0.0.3 ping statistics --- 00:16:12.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.343 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:12.343 17:19:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:12.343 00:16:12.343 --- 10.0.0.1 ping statistics --- 00:16:12.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.343 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:12.343 17:19:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.343 17:19:42 -- nvmf/common.sh@422 -- # return 0 00:16:12.343 17:19:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:12.343 17:19:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.343 17:19:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:12.343 17:19:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:12.343 17:19:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.343 17:19:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:12.343 17:19:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:12.343 17:19:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:12.343 17:19:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:12.343 17:19:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:12.343 17:19:42 -- common/autotest_common.sh@10 -- # set +x 00:16:12.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
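Stripped of the surrounding xtrace, the veth/bridge topology that nvmf_veth_init assembles above comes down to roughly the following sequence (a condensed sketch; interface, namespace, and address names are taken from the log, and the second target interface with 10.0.0.3 is set up the same way and omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After this the initiator side can reach the target namespace at 10.0.0.2 over the bridge, which is what the ping checks above confirm before the nvmf target application is started inside nvmf_tgt_ns_spdk.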
00:16:12.343 17:19:42 -- nvmf/common.sh@470 -- # nvmfpid=80873 00:16:12.343 17:19:42 -- nvmf/common.sh@471 -- # waitforlisten 80873 00:16:12.343 17:19:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:12.343 17:19:42 -- common/autotest_common.sh@817 -- # '[' -z 80873 ']' 00:16:12.343 17:19:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.343 17:19:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:12.343 17:19:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.343 17:19:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:12.343 17:19:42 -- common/autotest_common.sh@10 -- # set +x 00:16:12.343 [2024-04-25 17:19:42.227698] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:12.343 [2024-04-25 17:19:42.227798] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.602 [2024-04-25 17:19:42.365664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:12.602 [2024-04-25 17:19:42.413410] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.602 [2024-04-25 17:19:42.413456] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.602 [2024-04-25 17:19:42.413466] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.602 [2024-04-25 17:19:42.413474] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.602 [2024-04-25 17:19:42.413480] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:12.602 [2024-04-25 17:19:42.413694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:12.602 [2024-04-25 17:19:42.414405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:12.602 [2024-04-25 17:19:42.414564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:12.602 [2024-04-25 17:19:42.414570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.562 17:19:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.562 17:19:43 -- common/autotest_common.sh@850 -- # return 0 00:16:13.562 17:19:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:13.562 17:19:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:13.562 17:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.562 17:19:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.562 17:19:43 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:13.562 17:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.562 17:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.562 [2024-04-25 17:19:43.254871] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.562 17:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.562 17:19:43 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:13.562 17:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.562 17:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.562 Malloc0 00:16:13.562 17:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.562 17:19:43 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:13.562 17:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.562 17:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.562 17:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.562 17:19:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:13.562 17:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.562 17:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.562 17:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.562 17:19:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:13.562 17:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.562 17:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:13.562 [2024-04-25 17:19:43.326315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.562 17:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.562 17:19:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:13.562 17:19:43 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:13.562 17:19:43 -- nvmf/common.sh@521 -- # config=() 00:16:13.562 17:19:43 -- nvmf/common.sh@521 -- # local subsystem config 00:16:13.562 17:19:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.562 17:19:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.562 { 00:16:13.562 "params": { 00:16:13.562 "name": "Nvme$subsystem", 00:16:13.562 "trtype": "$TEST_TRANSPORT", 00:16:13.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.562 "adrfam": "ipv4", 00:16:13.562 "trsvcid": "$NVMF_PORT", 00:16:13.562 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.562 "hdgst": ${hdgst:-false}, 00:16:13.562 "ddgst": ${ddgst:-false} 00:16:13.562 }, 00:16:13.562 "method": "bdev_nvme_attach_controller" 00:16:13.562 } 00:16:13.562 EOF 00:16:13.562 )") 00:16:13.562 17:19:43 -- nvmf/common.sh@543 -- # cat 00:16:13.562 17:19:43 -- nvmf/common.sh@545 -- # jq . 00:16:13.562 17:19:43 -- nvmf/common.sh@546 -- # IFS=, 00:16:13.562 17:19:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:13.562 "params": { 00:16:13.562 "name": "Nvme1", 00:16:13.562 "trtype": "tcp", 00:16:13.562 "traddr": "10.0.0.2", 00:16:13.562 "adrfam": "ipv4", 00:16:13.562 "trsvcid": "4420", 00:16:13.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:13.562 "hdgst": false, 00:16:13.562 "ddgst": false 00:16:13.562 }, 00:16:13.562 "method": "bdev_nvme_attach_controller" 00:16:13.562 }' 00:16:13.562 [2024-04-25 17:19:43.376165] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:13.562 [2024-04-25 17:19:43.376244] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80933 ] 00:16:13.562 [2024-04-25 17:19:43.511927] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:13.821 [2024-04-25 17:19:43.582022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.821 [2024-04-25 17:19:43.582161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.821 [2024-04-25 17:19:43.582169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.821 I/O targets: 00:16:13.821 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:13.821 00:16:13.821 00:16:13.821 CUnit - A unit testing framework for C - Version 2.1-3 00:16:13.821 http://cunit.sourceforge.net/ 00:16:13.821 00:16:13.821 00:16:13.821 Suite: bdevio tests on: Nvme1n1 00:16:13.821 Test: blockdev write read block ...passed 00:16:14.079 Test: blockdev write zeroes read block ...passed 00:16:14.079 Test: blockdev write zeroes read no split ...passed 00:16:14.079 Test: blockdev write zeroes read split ...passed 00:16:14.079 Test: blockdev write zeroes read split partial ...passed 00:16:14.079 Test: blockdev reset ...[2024-04-25 17:19:43.843598] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:14.080 [2024-04-25 17:19:43.843695] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1118880 (9): Bad file descriptor 00:16:14.080 [2024-04-25 17:19:43.858587] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:14.080 passed 00:16:14.080 Test: blockdev write read 8 blocks ...passed 00:16:14.080 Test: blockdev write read size > 128k ...passed 00:16:14.080 Test: blockdev write read invalid size ...passed 00:16:14.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:14.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:14.080 Test: blockdev write read max offset ...passed 00:16:14.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:14.080 Test: blockdev writev readv 8 blocks ...passed 00:16:14.080 Test: blockdev writev readv 30 x 1block ...passed 00:16:14.080 Test: blockdev writev readv block ...passed 00:16:14.080 Test: blockdev writev readv size > 128k ...passed 00:16:14.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:14.080 Test: blockdev comparev and writev ...[2024-04-25 17:19:44.034266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.034562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:14.080 [2024-04-25 17:19:44.034690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.034809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:14.080 [2024-04-25 17:19:44.035209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.035442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:14.080 [2024-04-25 17:19:44.035670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.035929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:14.080 [2024-04-25 17:19:44.036367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.036566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:14.080 [2024-04-25 17:19:44.036830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.037054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:14.080 [2024-04-25 17:19:44.037546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.037760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:14.080 [2024-04-25 17:19:44.037997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:14.080 [2024-04-25 17:19:44.038214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:14.339 passed 00:16:14.339 Test: blockdev nvme passthru rw ...passed 00:16:14.339 Test: blockdev nvme passthru vendor specific ...[2024-04-25 17:19:44.121498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:14.339 [2024-04-25 17:19:44.121812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:14.339 [2024-04-25 17:19:44.122175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:14.339 [2024-04-25 17:19:44.122403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:14.339 [2024-04-25 17:19:44.122752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:14.339 [2024-04-25 17:19:44.122953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:14.339 [2024-04-25 17:19:44.123315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:14.339 [2024-04-25 17:19:44.123540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:14.339 passed 00:16:14.339 Test: blockdev nvme admin passthru ...passed 00:16:14.339 Test: blockdev copy ...passed 00:16:14.339 00:16:14.339 Run Summary: Type Total Ran Passed Failed Inactive 00:16:14.339 suites 1 1 n/a 0 0 00:16:14.339 tests 23 23 23 0 0 00:16:14.339 asserts 152 152 152 0 n/a 00:16:14.339 00:16:14.339 Elapsed time = 0.906 seconds 00:16:14.598 17:19:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.598 17:19:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.598 17:19:44 -- common/autotest_common.sh@10 -- # set +x 00:16:14.598 17:19:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.598 17:19:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:14.598 17:19:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:14.598 17:19:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:14.598 17:19:44 -- nvmf/common.sh@117 -- # sync 00:16:14.598 17:19:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.598 17:19:44 -- nvmf/common.sh@120 -- # set +e 00:16:14.598 17:19:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.598 17:19:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.598 rmmod nvme_tcp 00:16:14.598 rmmod nvme_fabrics 00:16:14.598 rmmod nvme_keyring 00:16:14.598 17:19:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.598 17:19:44 -- nvmf/common.sh@124 -- # set -e 00:16:14.598 17:19:44 -- nvmf/common.sh@125 -- # return 0 00:16:14.598 17:19:44 -- nvmf/common.sh@478 -- # '[' -n 80873 ']' 00:16:14.598 17:19:44 -- nvmf/common.sh@479 -- # killprocess 80873 00:16:14.598 17:19:44 -- common/autotest_common.sh@936 -- # '[' -z 80873 ']' 00:16:14.598 17:19:44 -- common/autotest_common.sh@940 -- # kill -0 80873 00:16:14.598 17:19:44 -- common/autotest_common.sh@941 -- # uname 00:16:14.598 17:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:14.598 17:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80873 00:16:14.598 killing process with pid 80873 00:16:14.598 
17:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:14.598 17:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:14.598 17:19:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80873' 00:16:14.598 17:19:44 -- common/autotest_common.sh@955 -- # kill 80873 00:16:14.598 17:19:44 -- common/autotest_common.sh@960 -- # wait 80873 00:16:14.857 17:19:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:14.857 17:19:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:14.857 17:19:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:14.857 17:19:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.857 17:19:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.857 17:19:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.857 17:19:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.857 17:19:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.857 17:19:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:14.857 00:16:14.857 real 0m3.000s 00:16:14.857 user 0m10.836s 00:16:14.857 sys 0m0.692s 00:16:14.857 17:19:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:14.857 17:19:44 -- common/autotest_common.sh@10 -- # set +x 00:16:14.857 ************************************ 00:16:14.857 END TEST nvmf_bdevio 00:16:14.857 ************************************ 00:16:14.857 17:19:44 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:16:14.857 17:19:44 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:14.857 17:19:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:14.857 17:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.857 17:19:44 -- common/autotest_common.sh@10 -- # set +x 00:16:14.857 ************************************ 00:16:14.857 START TEST nvmf_bdevio_no_huge 00:16:14.857 ************************************ 00:16:14.857 17:19:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:15.117 * Looking for test storage... 
00:16:15.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:15.117 17:19:44 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.117 17:19:44 -- nvmf/common.sh@7 -- # uname -s 00:16:15.117 17:19:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.117 17:19:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.117 17:19:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.117 17:19:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.117 17:19:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.117 17:19:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.117 17:19:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.117 17:19:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.117 17:19:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.117 17:19:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.117 17:19:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:16:15.117 17:19:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:16:15.117 17:19:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.117 17:19:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.117 17:19:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.117 17:19:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.117 17:19:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.117 17:19:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.117 17:19:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.117 17:19:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.117 17:19:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.117 17:19:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.117 17:19:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.117 17:19:44 -- paths/export.sh@5 -- # export PATH 00:16:15.117 17:19:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.117 17:19:44 -- nvmf/common.sh@47 -- # : 0 00:16:15.117 17:19:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.117 17:19:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.117 17:19:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.117 17:19:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.117 17:19:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.117 17:19:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.117 17:19:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.117 17:19:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.117 17:19:44 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.117 17:19:44 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.117 17:19:44 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:15.117 17:19:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:15.117 17:19:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.117 17:19:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:15.117 17:19:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:15.117 17:19:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:15.117 17:19:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.117 17:19:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.117 17:19:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.117 17:19:44 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:15.117 17:19:44 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:15.117 17:19:44 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:15.117 17:19:44 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:15.117 17:19:44 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:15.117 17:19:44 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:15.117 17:19:44 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.117 17:19:44 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.117 17:19:44 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.117 17:19:44 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:15.117 17:19:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.117 17:19:44 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.117 17:19:44 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.117 17:19:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.117 17:19:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.117 17:19:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.117 17:19:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.117 17:19:44 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.117 17:19:44 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:15.117 17:19:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:15.117 Cannot find device "nvmf_tgt_br" 00:16:15.117 17:19:44 -- nvmf/common.sh@155 -- # true 00:16:15.117 17:19:44 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.117 Cannot find device "nvmf_tgt_br2" 00:16:15.117 17:19:44 -- nvmf/common.sh@156 -- # true 00:16:15.117 17:19:44 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:15.117 17:19:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:15.117 Cannot find device "nvmf_tgt_br" 00:16:15.117 17:19:44 -- nvmf/common.sh@158 -- # true 00:16:15.117 17:19:44 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:15.117 Cannot find device "nvmf_tgt_br2" 00:16:15.117 17:19:44 -- nvmf/common.sh@159 -- # true 00:16:15.117 17:19:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:15.117 17:19:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:15.117 17:19:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.118 17:19:45 -- nvmf/common.sh@162 -- # true 00:16:15.118 17:19:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.118 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.118 17:19:45 -- nvmf/common.sh@163 -- # true 00:16:15.118 17:19:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.118 17:19:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.118 17:19:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.118 17:19:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.118 17:19:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:15.376 17:19:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:15.376 17:19:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:15.376 17:19:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:15.376 17:19:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:15.376 17:19:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:15.376 17:19:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:15.376 17:19:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:15.377 17:19:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:15.377 17:19:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.377 17:19:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:15.377 17:19:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:15.377 17:19:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:15.377 17:19:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:15.377 17:19:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:15.377 17:19:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.377 17:19:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.377 17:19:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.377 17:19:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.377 17:19:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:15.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:16:15.377 00:16:15.377 --- 10.0.0.2 ping statistics --- 00:16:15.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.377 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:16:15.377 17:19:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:15.377 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.377 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:15.377 00:16:15.377 --- 10.0.0.3 ping statistics --- 00:16:15.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.377 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:15.377 17:19:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:15.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:15.377 00:16:15.377 --- 10.0.0.1 ping statistics --- 00:16:15.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.377 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:15.377 17:19:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.377 17:19:45 -- nvmf/common.sh@422 -- # return 0 00:16:15.377 17:19:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:15.377 17:19:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.377 17:19:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:15.377 17:19:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:15.377 17:19:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.377 17:19:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:15.377 17:19:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:15.377 17:19:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:15.377 17:19:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:15.377 17:19:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:15.377 17:19:45 -- common/autotest_common.sh@10 -- # set +x 00:16:15.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
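For readability, the veth topology that nvmf_veth_init assembles in the trace above can be condensed to the following equivalent commands (a sketch distilled from the xtrace; the interface names and 10.0.0.x/24 addresses are the ones this run uses, and the individual link-up steps are elided):

# Target side lives in its own network namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator leg, 10.0.0.1/24
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target leg, 10.0.0.2/24
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target leg, 10.0.0.3/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge                               # ties the three *_br peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# All links are then brought up (including lo inside the namespace), and the three
# pings recorded above confirm 10.0.0.1 <-> 10.0.0.2/10.0.0.3 connectivity across nvmf_br.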
00:16:15.377 17:19:45 -- nvmf/common.sh@470 -- # nvmfpid=81111 00:16:15.377 17:19:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:15.377 17:19:45 -- nvmf/common.sh@471 -- # waitforlisten 81111 00:16:15.377 17:19:45 -- common/autotest_common.sh@817 -- # '[' -z 81111 ']' 00:16:15.377 17:19:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.377 17:19:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:15.377 17:19:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.377 17:19:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:15.377 17:19:45 -- common/autotest_common.sh@10 -- # set +x 00:16:15.377 [2024-04-25 17:19:45.319586] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:15.377 [2024-04-25 17:19:45.319666] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:15.636 [2024-04-25 17:19:45.455439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.636 [2024-04-25 17:19:45.545747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.636 [2024-04-25 17:19:45.545796] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.636 [2024-04-25 17:19:45.545805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.636 [2024-04-25 17:19:45.545812] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.636 [2024-04-25 17:19:45.545817] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:15.636 [2024-04-25 17:19:45.545955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:15.636 [2024-04-25 17:19:45.546481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:15.636 [2024-04-25 17:19:45.546594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:15.636 [2024-04-25 17:19:45.546602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.577 17:19:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.577 17:19:46 -- common/autotest_common.sh@850 -- # return 0 00:16:16.577 17:19:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:16.577 17:19:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:16.577 17:19:46 -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 17:19:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.577 17:19:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.577 17:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.577 17:19:46 -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 [2024-04-25 17:19:46.305799] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.577 17:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.577 17:19:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:16.577 17:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.577 17:19:46 -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 Malloc0 00:16:16.577 17:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.577 17:19:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:16.577 17:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.577 17:19:46 -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 17:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.577 17:19:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:16.578 17:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.578 17:19:46 -- common/autotest_common.sh@10 -- # set +x 00:16:16.578 17:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.578 17:19:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.578 17:19:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.578 17:19:46 -- common/autotest_common.sh@10 -- # set +x 00:16:16.578 [2024-04-25 17:19:46.347220] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.578 17:19:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.578 17:19:46 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:16.578 17:19:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:16.578 17:19:46 -- nvmf/common.sh@521 -- # config=() 00:16:16.578 17:19:46 -- nvmf/common.sh@521 -- # local subsystem config 00:16:16.578 17:19:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.578 17:19:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.578 { 00:16:16.578 "params": { 00:16:16.578 "name": "Nvme$subsystem", 00:16:16.578 "trtype": "$TEST_TRANSPORT", 00:16:16.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.578 "adrfam": "ipv4", 00:16:16.578 "trsvcid": "$NVMF_PORT", 
00:16:16.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.578 "hdgst": ${hdgst:-false}, 00:16:16.578 "ddgst": ${ddgst:-false} 00:16:16.578 }, 00:16:16.578 "method": "bdev_nvme_attach_controller" 00:16:16.578 } 00:16:16.578 EOF 00:16:16.578 )") 00:16:16.578 17:19:46 -- nvmf/common.sh@543 -- # cat 00:16:16.578 17:19:46 -- nvmf/common.sh@545 -- # jq . 00:16:16.578 17:19:46 -- nvmf/common.sh@546 -- # IFS=, 00:16:16.578 17:19:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:16.578 "params": { 00:16:16.578 "name": "Nvme1", 00:16:16.578 "trtype": "tcp", 00:16:16.578 "traddr": "10.0.0.2", 00:16:16.578 "adrfam": "ipv4", 00:16:16.578 "trsvcid": "4420", 00:16:16.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:16.578 "hdgst": false, 00:16:16.578 "ddgst": false 00:16:16.578 }, 00:16:16.578 "method": "bdev_nvme_attach_controller" 00:16:16.578 }' 00:16:16.578 [2024-04-25 17:19:46.413763] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:16.578 [2024-04-25 17:19:46.413849] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid81164 ] 00:16:16.841 [2024-04-25 17:19:46.556951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:16.841 [2024-04-25 17:19:46.683535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.841 [2024-04-25 17:19:46.683672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.841 [2024-04-25 17:19:46.683676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.099 I/O targets: 00:16:17.099 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:17.099 00:16:17.099 00:16:17.099 CUnit - A unit testing framework for C - Version 2.1-3 00:16:17.099 http://cunit.sourceforge.net/ 00:16:17.099 00:16:17.099 00:16:17.099 Suite: bdevio tests on: Nvme1n1 00:16:17.099 Test: blockdev write read block ...passed 00:16:17.099 Test: blockdev write zeroes read block ...passed 00:16:17.099 Test: blockdev write zeroes read no split ...passed 00:16:17.099 Test: blockdev write zeroes read split ...passed 00:16:17.099 Test: blockdev write zeroes read split partial ...passed 00:16:17.099 Test: blockdev reset ...[2024-04-25 17:19:46.980645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.099 [2024-04-25 17:19:46.980983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1679470 (9): Bad file descriptor 00:16:17.099 [2024-04-25 17:19:46.997005] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:17.099 passed 00:16:17.099 Test: blockdev write read 8 blocks ...passed 00:16:17.099 Test: blockdev write read size > 128k ...passed 00:16:17.099 Test: blockdev write read invalid size ...passed 00:16:17.099 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:17.099 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:17.100 Test: blockdev write read max offset ...passed 00:16:17.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:17.359 Test: blockdev writev readv 8 blocks ...passed 00:16:17.359 Test: blockdev writev readv 30 x 1block ...passed 00:16:17.359 Test: blockdev writev readv block ...passed 00:16:17.359 Test: blockdev writev readv size > 128k ...passed 00:16:17.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:17.359 Test: blockdev comparev and writev ...[2024-04-25 17:19:47.173291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.173338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.173357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.173366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.173648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.173664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.173679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.173688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.174017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.174034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.174050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.174059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.174351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.174367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.174382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.359 [2024-04-25 17:19:47.174391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:17.359 passed 00:16:17.359 Test: blockdev nvme passthru rw ...passed 00:16:17.359 Test: blockdev nvme passthru vendor specific ...[2024-04-25 17:19:47.258042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.359 [2024-04-25 17:19:47.258066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.258204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.359 [2024-04-25 17:19:47.258219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.258332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.359 [2024-04-25 17:19:47.258346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:17.359 [2024-04-25 17:19:47.258457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.359 [2024-04-25 17:19:47.258471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:17.359 passed 00:16:17.359 Test: blockdev nvme admin passthru ...passed 00:16:17.359 Test: blockdev copy ...passed 00:16:17.359 00:16:17.359 Run Summary: Type Total Ran Passed Failed Inactive 00:16:17.359 suites 1 1 n/a 0 0 00:16:17.359 tests 23 23 23 0 0 00:16:17.359 asserts 152 152 152 0 n/a 00:16:17.359 00:16:17.359 Elapsed time = 0.936 seconds 00:16:17.926 17:19:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.926 17:19:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.926 17:19:47 -- common/autotest_common.sh@10 -- # set +x 00:16:17.926 17:19:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.926 17:19:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:17.926 17:19:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:17.926 17:19:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:17.926 17:19:47 -- nvmf/common.sh@117 -- # sync 00:16:17.926 17:19:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:17.927 17:19:47 -- nvmf/common.sh@120 -- # set +e 00:16:17.927 17:19:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.927 17:19:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:17.927 rmmod nvme_tcp 00:16:17.927 rmmod nvme_fabrics 00:16:17.927 rmmod nvme_keyring 00:16:17.927 17:19:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.927 17:19:47 -- nvmf/common.sh@124 -- # set -e 00:16:17.927 17:19:47 -- nvmf/common.sh@125 -- # return 0 00:16:17.927 17:19:47 -- nvmf/common.sh@478 -- # '[' -n 81111 ']' 00:16:17.927 17:19:47 -- nvmf/common.sh@479 -- # killprocess 81111 00:16:17.927 17:19:47 -- common/autotest_common.sh@936 -- # '[' -z 81111 ']' 00:16:17.927 17:19:47 -- common/autotest_common.sh@940 -- # kill -0 81111 00:16:17.927 17:19:47 -- common/autotest_common.sh@941 -- # uname 00:16:17.927 17:19:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:17.927 17:19:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81111 00:16:17.927 17:19:47 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:16:17.927 17:19:47 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:17.927 killing process with pid 81111 00:16:17.927 17:19:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81111' 00:16:17.927 17:19:47 -- common/autotest_common.sh@955 -- # kill 81111 00:16:17.927 17:19:47 -- common/autotest_common.sh@960 -- # wait 81111 00:16:18.186 17:19:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:18.186 17:19:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:18.186 17:19:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:18.186 17:19:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.186 17:19:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:18.186 17:19:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.186 17:19:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.186 17:19:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.446 17:19:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:18.446 00:16:18.446 real 0m3.369s 00:16:18.446 user 0m12.361s 00:16:18.446 sys 0m1.173s 00:16:18.446 17:19:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:18.446 ************************************ 00:16:18.446 END TEST nvmf_bdevio_no_huge 00:16:18.446 ************************************ 00:16:18.446 17:19:48 -- common/autotest_common.sh@10 -- # set +x 00:16:18.446 17:19:48 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:18.446 17:19:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:18.446 17:19:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.446 17:19:48 -- common/autotest_common.sh@10 -- # set +x 00:16:18.446 ************************************ 00:16:18.446 START TEST nvmf_tls 00:16:18.446 ************************************ 00:16:18.446 17:19:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:18.446 * Looking for test storage... 
00:16:18.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:18.446 17:19:48 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:18.446 17:19:48 -- nvmf/common.sh@7 -- # uname -s 00:16:18.446 17:19:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.446 17:19:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.446 17:19:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.446 17:19:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.447 17:19:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.447 17:19:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.447 17:19:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.447 17:19:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.447 17:19:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.447 17:19:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.447 17:19:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:16:18.447 17:19:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:16:18.447 17:19:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.447 17:19:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.447 17:19:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:18.447 17:19:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.447 17:19:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:18.447 17:19:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.447 17:19:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.447 17:19:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.447 17:19:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.447 17:19:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.447 17:19:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.447 17:19:48 -- paths/export.sh@5 -- # export PATH 00:16:18.447 17:19:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.447 17:19:48 -- nvmf/common.sh@47 -- # : 0 00:16:18.447 17:19:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:18.447 17:19:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:18.447 17:19:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.447 17:19:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.447 17:19:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.447 17:19:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:18.447 17:19:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:18.447 17:19:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:18.447 17:19:48 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:18.447 17:19:48 -- target/tls.sh@62 -- # nvmftestinit 00:16:18.447 17:19:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:18.447 17:19:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.447 17:19:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:18.447 17:19:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:18.447 17:19:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:18.447 17:19:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.447 17:19:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.447 17:19:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.447 17:19:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:18.447 17:19:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:18.447 17:19:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:18.447 17:19:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:18.447 17:19:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:18.447 17:19:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:18.447 17:19:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.447 17:19:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.447 17:19:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:18.447 17:19:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:18.447 17:19:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:18.447 17:19:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:18.447 17:19:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:18.447 
17:19:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.447 17:19:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:18.447 17:19:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:18.447 17:19:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:18.447 17:19:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:18.447 17:19:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:18.707 17:19:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:18.707 Cannot find device "nvmf_tgt_br" 00:16:18.707 17:19:48 -- nvmf/common.sh@155 -- # true 00:16:18.707 17:19:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:18.707 Cannot find device "nvmf_tgt_br2" 00:16:18.707 17:19:48 -- nvmf/common.sh@156 -- # true 00:16:18.707 17:19:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:18.707 17:19:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:18.707 Cannot find device "nvmf_tgt_br" 00:16:18.707 17:19:48 -- nvmf/common.sh@158 -- # true 00:16:18.707 17:19:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:18.707 Cannot find device "nvmf_tgt_br2" 00:16:18.707 17:19:48 -- nvmf/common.sh@159 -- # true 00:16:18.707 17:19:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:18.707 17:19:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:18.707 17:19:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:18.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.707 17:19:48 -- nvmf/common.sh@162 -- # true 00:16:18.707 17:19:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:18.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:18.707 17:19:48 -- nvmf/common.sh@163 -- # true 00:16:18.707 17:19:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:18.707 17:19:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:18.707 17:19:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:18.707 17:19:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:18.707 17:19:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:18.707 17:19:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:18.707 17:19:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:18.707 17:19:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:18.707 17:19:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:18.707 17:19:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:18.707 17:19:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:18.707 17:19:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:18.707 17:19:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:18.707 17:19:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:18.707 17:19:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:18.707 17:19:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:18.707 17:19:48 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:18.707 17:19:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:18.707 17:19:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:18.967 17:19:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:18.967 17:19:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:18.967 17:19:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:18.967 17:19:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:18.967 17:19:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:18.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:18.967 00:16:18.967 --- 10.0.0.2 ping statistics --- 00:16:18.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.967 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:18.967 17:19:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:18.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:18.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:16:18.967 00:16:18.967 --- 10.0.0.3 ping statistics --- 00:16:18.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.967 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:16:18.967 17:19:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:18.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:18.967 00:16:18.967 --- 10.0.0.1 ping statistics --- 00:16:18.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.967 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:18.967 17:19:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.967 17:19:48 -- nvmf/common.sh@422 -- # return 0 00:16:18.967 17:19:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:18.967 17:19:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.967 17:19:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:18.967 17:19:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:18.967 17:19:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.967 17:19:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:18.967 17:19:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:18.967 17:19:48 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:18.967 17:19:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:18.967 17:19:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:18.967 17:19:48 -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:18.967 17:19:48 -- nvmf/common.sh@470 -- # nvmfpid=81357 00:16:18.967 17:19:48 -- nvmf/common.sh@471 -- # waitforlisten 81357 00:16:18.967 17:19:48 -- common/autotest_common.sh@817 -- # '[' -z 81357 ']' 00:16:18.967 17:19:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.967 17:19:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:18.967 17:19:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:18.967 17:19:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.967 17:19:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:18.967 17:19:48 -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 [2024-04-25 17:19:48.822611] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:18.967 [2024-04-25 17:19:48.822677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.227 [2024-04-25 17:19:48.960592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.227 [2024-04-25 17:19:49.029084] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.227 [2024-04-25 17:19:49.029145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.227 [2024-04-25 17:19:49.029160] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.227 [2024-04-25 17:19:49.029170] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.227 [2024-04-25 17:19:49.029179] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
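The trace that follows configures this target for TLS before the first bdevperf run. Condensed into plain scripts/rpc.py calls, it amounts to the sequence below (a readability sketch assembled from the xtrace further down, in the order the script issues the calls; the PSK file path is the one this run happens to generate):

# Switch the socket layer to the ssl implementation and pin TLS 1.3 before framework init.
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
# Create the TCP transport, the subsystem, and a TLS-enabled listener (-k).
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
# Back the subsystem with a malloc bdev and allow host1 in with the PSK written out just below.
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZFn3ATX4XS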
00:16:19.227 [2024-04-25 17:19:49.029230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.795 17:19:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:19.795 17:19:49 -- common/autotest_common.sh@850 -- # return 0 00:16:19.795 17:19:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:19.795 17:19:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:19.795 17:19:49 -- common/autotest_common.sh@10 -- # set +x 00:16:19.795 17:19:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.795 17:19:49 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:19.795 17:19:49 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:20.055 true 00:16:20.055 17:19:49 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:20.055 17:19:49 -- target/tls.sh@73 -- # jq -r .tls_version 00:16:20.314 17:19:50 -- target/tls.sh@73 -- # version=0 00:16:20.314 17:19:50 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:20.314 17:19:50 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:20.574 17:19:50 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:20.574 17:19:50 -- target/tls.sh@81 -- # jq -r .tls_version 00:16:20.833 17:19:50 -- target/tls.sh@81 -- # version=13 00:16:20.833 17:19:50 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:20.833 17:19:50 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:21.093 17:19:50 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:21.093 17:19:50 -- target/tls.sh@89 -- # jq -r .tls_version 00:16:21.352 17:19:51 -- target/tls.sh@89 -- # version=7 00:16:21.352 17:19:51 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:21.352 17:19:51 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:21.352 17:19:51 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:21.611 17:19:51 -- target/tls.sh@96 -- # ktls=false 00:16:21.611 17:19:51 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:21.611 17:19:51 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:21.871 17:19:51 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:21.871 17:19:51 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:22.130 17:19:51 -- target/tls.sh@104 -- # ktls=true 00:16:22.130 17:19:51 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:22.130 17:19:51 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:22.130 17:19:52 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:22.130 17:19:52 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:22.390 17:19:52 -- target/tls.sh@112 -- # ktls=false 00:16:22.390 17:19:52 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:22.390 17:19:52 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:22.390 17:19:52 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:22.390 17:19:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:22.390 17:19:52 -- nvmf/common.sh@693 -- # 
prefix=NVMeTLSkey-1 00:16:22.390 17:19:52 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:16:22.390 17:19:52 -- nvmf/common.sh@693 -- # digest=1 00:16:22.390 17:19:52 -- nvmf/common.sh@694 -- # python - 00:16:22.390 17:19:52 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:22.390 17:19:52 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:22.390 17:19:52 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:22.390 17:19:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:22.390 17:19:52 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:22.390 17:19:52 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:16:22.390 17:19:52 -- nvmf/common.sh@693 -- # digest=1 00:16:22.390 17:19:52 -- nvmf/common.sh@694 -- # python - 00:16:22.390 17:19:52 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:22.390 17:19:52 -- target/tls.sh@121 -- # mktemp 00:16:22.650 17:19:52 -- target/tls.sh@121 -- # key_path=/tmp/tmp.ZFn3ATX4XS 00:16:22.650 17:19:52 -- target/tls.sh@122 -- # mktemp 00:16:22.650 17:19:52 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.NEdqRm0FHH 00:16:22.650 17:19:52 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:22.650 17:19:52 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:22.650 17:19:52 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ZFn3ATX4XS 00:16:22.650 17:19:52 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NEdqRm0FHH 00:16:22.650 17:19:52 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:22.909 17:19:52 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:23.169 17:19:52 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ZFn3ATX4XS 00:16:23.169 17:19:52 -- target/tls.sh@49 -- # local key=/tmp/tmp.ZFn3ATX4XS 00:16:23.169 17:19:52 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:23.169 [2024-04-25 17:19:53.061197] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.169 17:19:53 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:23.428 17:19:53 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:23.686 [2024-04-25 17:19:53.593288] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:23.686 [2024-04-25 17:19:53.593489] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.686 17:19:53 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:23.945 malloc0 00:16:23.945 17:19:53 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.205 17:19:54 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZFn3ATX4XS 00:16:24.464 [2024-04-25 17:19:54.247198] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature 
PSK path to be removed in v24.09 00:16:24.464 17:19:54 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZFn3ATX4XS 00:16:34.491 Initializing NVMe Controllers 00:16:34.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:34.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:34.491 Initialization complete. Launching workers. 00:16:34.491 ======================================================== 00:16:34.491 Latency(us) 00:16:34.491 Device Information : IOPS MiB/s Average min max 00:16:34.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11196.40 43.74 5717.16 1449.75 8717.68 00:16:34.491 ======================================================== 00:16:34.491 Total : 11196.40 43.74 5717.16 1449.75 8717.68 00:16:34.491 00:16:34.491 17:20:04 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZFn3ATX4XS 00:16:34.491 17:20:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:34.491 17:20:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:34.491 17:20:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:34.491 17:20:04 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZFn3ATX4XS' 00:16:34.491 17:20:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.491 17:20:04 -- target/tls.sh@28 -- # bdevperf_pid=81707 00:16:34.491 17:20:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:34.491 17:20:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:34.491 17:20:04 -- target/tls.sh@31 -- # waitforlisten 81707 /var/tmp/bdevperf.sock 00:16:34.491 17:20:04 -- common/autotest_common.sh@817 -- # '[' -z 81707 ']' 00:16:34.491 17:20:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.491 17:20:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.491 17:20:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.491 17:20:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.491 17:20:04 -- common/autotest_common.sh@10 -- # set +x 00:16:34.751 [2024-04-25 17:20:04.516227] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
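An aside on the key material generated at the start of this test: the NVMeTLSkey-1:01:...: values written to /tmp/tmp.ZFn3ATX4XS and /tmp/tmp.NEdqRm0FHH are TLS PSK interchange strings. Decoding the base64 portion shows the ASCII bytes of the key argument followed by a four-byte trailer, with the two-digit field taken from the digest argument. A minimal stand-alone sketch of that construction, assuming the trailer is a little-endian CRC-32 (an assumption based on the strings in this trace; this is not the helper from nvmf/common.sh):

    format_psk_sketch() {
        # $1 = key string (used as raw ASCII bytes), $2 = digest indicator (1 or 2)
        local key=$1 digest=$2
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest"
    }
    # format_psk_sketch 00112233445566778899aabbccddeeff 1
    # should reproduce the shape of the first NVMeTLSkey-1:01:...: string above if the CRC-32 assumption holds.

The chmod 0600 applied to both temp files right after they are written matters later in this run; see the permission test near the end of the log.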
00:16:34.751 [2024-04-25 17:20:04.516352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81707 ] 00:16:34.751 [2024-04-25 17:20:04.656389] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.751 [2024-04-25 17:20:04.725030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.688 17:20:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.688 17:20:05 -- common/autotest_common.sh@850 -- # return 0 00:16:35.688 17:20:05 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZFn3ATX4XS 00:16:35.688 [2024-04-25 17:20:05.632343] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.688 [2024-04-25 17:20:05.632444] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:35.947 TLSTESTn1 00:16:35.947 17:20:05 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:35.947 Running I/O for 10 seconds... 00:16:45.923 00:16:45.923 Latency(us) 00:16:45.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.923 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:45.923 Verification LBA range: start 0x0 length 0x2000 00:16:45.923 TLSTESTn1 : 10.02 4510.73 17.62 0.00 0.00 28325.74 6911.07 19899.11 00:16:45.923 =================================================================================================================== 00:16:45.923 Total : 4510.73 17.62 0.00 0.00 28325.74 6911.07 19899.11 00:16:45.923 0 00:16:45.923 17:20:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.923 17:20:15 -- target/tls.sh@45 -- # killprocess 81707 00:16:45.923 17:20:15 -- common/autotest_common.sh@936 -- # '[' -z 81707 ']' 00:16:45.923 17:20:15 -- common/autotest_common.sh@940 -- # kill -0 81707 00:16:45.923 17:20:15 -- common/autotest_common.sh@941 -- # uname 00:16:45.923 17:20:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.923 17:20:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81707 00:16:45.923 killing process with pid 81707 00:16:45.923 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.923 00:16:45.923 Latency(us) 00:16:45.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.923 =================================================================================================================== 00:16:45.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.923 17:20:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:45.923 17:20:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:45.923 17:20:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81707' 00:16:45.923 17:20:15 -- common/autotest_common.sh@955 -- # kill 81707 00:16:45.923 [2024-04-25 17:20:15.868540] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:45.923 
17:20:15 -- common/autotest_common.sh@960 -- # wait 81707 00:16:46.182 17:20:16 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NEdqRm0FHH 00:16:46.182 17:20:16 -- common/autotest_common.sh@638 -- # local es=0 00:16:46.182 17:20:16 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NEdqRm0FHH 00:16:46.182 17:20:16 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:46.182 17:20:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.182 17:20:16 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:46.182 17:20:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.182 17:20:16 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NEdqRm0FHH 00:16:46.182 17:20:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:46.182 17:20:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:46.182 17:20:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:46.182 17:20:16 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NEdqRm0FHH' 00:16:46.182 17:20:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.182 17:20:16 -- target/tls.sh@28 -- # bdevperf_pid=81860 00:16:46.182 17:20:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.182 17:20:16 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.182 17:20:16 -- target/tls.sh@31 -- # waitforlisten 81860 /var/tmp/bdevperf.sock 00:16:46.182 17:20:16 -- common/autotest_common.sh@817 -- # '[' -z 81860 ']' 00:16:46.182 17:20:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.182 17:20:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.182 17:20:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.182 17:20:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.182 17:20:16 -- common/autotest_common.sh@10 -- # set +x 00:16:46.182 [2024-04-25 17:20:16.107806] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:16:46.182 [2024-04-25 17:20:16.107903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81860 ] 00:16:46.442 [2024-04-25 17:20:16.240391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.442 [2024-04-25 17:20:16.293078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.379 17:20:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.379 17:20:17 -- common/autotest_common.sh@850 -- # return 0 00:16:47.379 17:20:17 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NEdqRm0FHH 00:16:47.380 [2024-04-25 17:20:17.188524] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:47.380 [2024-04-25 17:20:17.188681] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:47.380 [2024-04-25 17:20:17.194620] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:47.380 [2024-04-25 17:20:17.195257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabbaf0 (107): Transport endpoint is not connected 00:16:47.380 [2024-04-25 17:20:17.196248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xabbaf0 (9): Bad file descriptor 00:16:47.380 [2024-04-25 17:20:17.197246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:47.380 [2024-04-25 17:20:17.197268] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:47.380 [2024-04-25 17:20:17.197277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:47.380 2024/04/25 17:20:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.NEdqRm0FHH subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:47.380 request: 00:16:47.380 { 00:16:47.380 "method": "bdev_nvme_attach_controller", 00:16:47.380 "params": { 00:16:47.380 "name": "TLSTEST", 00:16:47.380 "trtype": "tcp", 00:16:47.380 "traddr": "10.0.0.2", 00:16:47.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:47.380 "adrfam": "ipv4", 00:16:47.380 "trsvcid": "4420", 00:16:47.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.380 "psk": "/tmp/tmp.NEdqRm0FHH" 00:16:47.380 } 00:16:47.380 } 00:16:47.380 Got JSON-RPC error response 00:16:47.380 GoRPCClient: error on JSON-RPC call 00:16:47.380 17:20:17 -- target/tls.sh@36 -- # killprocess 81860 00:16:47.380 17:20:17 -- common/autotest_common.sh@936 -- # '[' -z 81860 ']' 00:16:47.380 17:20:17 -- common/autotest_common.sh@940 -- # kill -0 81860 00:16:47.380 17:20:17 -- common/autotest_common.sh@941 -- # uname 00:16:47.380 17:20:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.380 17:20:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81860 00:16:47.380 17:20:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:47.380 killing process with pid 81860 00:16:47.380 17:20:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:47.380 17:20:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81860' 00:16:47.380 Received shutdown signal, test time was about 10.000000 seconds 00:16:47.380 00:16:47.380 Latency(us) 00:16:47.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.380 =================================================================================================================== 00:16:47.380 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.380 17:20:17 -- common/autotest_common.sh@955 -- # kill 81860 00:16:47.380 [2024-04-25 17:20:17.241139] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:47.380 17:20:17 -- common/autotest_common.sh@960 -- # wait 81860 00:16:47.639 17:20:17 -- target/tls.sh@37 -- # return 1 00:16:47.639 17:20:17 -- common/autotest_common.sh@641 -- # es=1 00:16:47.639 17:20:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:47.639 17:20:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:47.639 17:20:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:47.639 17:20:17 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZFn3ATX4XS 00:16:47.639 17:20:17 -- common/autotest_common.sh@638 -- # local es=0 00:16:47.639 17:20:17 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZFn3ATX4XS 00:16:47.639 17:20:17 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:47.639 17:20:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:47.639 17:20:17 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:47.639 17:20:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:47.639 17:20:17 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZFn3ATX4XS 00:16:47.639 17:20:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:47.639 17:20:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:47.639 17:20:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:47.639 17:20:17 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZFn3ATX4XS' 00:16:47.639 17:20:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.639 17:20:17 -- target/tls.sh@28 -- # bdevperf_pid=81900 00:16:47.639 17:20:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.639 17:20:17 -- target/tls.sh@31 -- # waitforlisten 81900 /var/tmp/bdevperf.sock 00:16:47.639 17:20:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.639 17:20:17 -- common/autotest_common.sh@817 -- # '[' -z 81900 ']' 00:16:47.639 17:20:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.639 17:20:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.639 17:20:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.639 17:20:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.639 17:20:17 -- common/autotest_common.sh@10 -- # set +x 00:16:47.639 [2024-04-25 17:20:17.465525] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:47.639 [2024-04-25 17:20:17.465627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81900 ] 00:16:47.639 [2024-04-25 17:20:17.600603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.898 [2024-04-25 17:20:17.653386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.467 17:20:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.467 17:20:18 -- common/autotest_common.sh@850 -- # return 0 00:16:48.467 17:20:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ZFn3ATX4XS 00:16:48.727 [2024-04-25 17:20:18.615441] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.727 [2024-04-25 17:20:18.615561] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:48.727 [2024-04-25 17:20:18.624806] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:48.727 [2024-04-25 17:20:18.624875] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:48.727 [2024-04-25 17:20:18.624925] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:48.727 [2024-04-25 17:20:18.625232] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6caf0 (107): Transport endpoint is not connected 00:16:48.727 [2024-04-25 17:20:18.626221] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6caf0 (9): Bad file descriptor 00:16:48.727 [2024-04-25 17:20:18.627218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:48.727 [2024-04-25 17:20:18.627252] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:48.727 [2024-04-25 17:20:18.627277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:48.727 2024/04/25 17:20:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.ZFn3ATX4XS subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:48.727 request: 00:16:48.727 { 00:16:48.727 "method": "bdev_nvme_attach_controller", 00:16:48.727 "params": { 00:16:48.727 "name": "TLSTEST", 00:16:48.727 "trtype": "tcp", 00:16:48.727 "traddr": "10.0.0.2", 00:16:48.727 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:48.727 "adrfam": "ipv4", 00:16:48.727 "trsvcid": "4420", 00:16:48.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.727 "psk": "/tmp/tmp.ZFn3ATX4XS" 00:16:48.727 } 00:16:48.727 } 00:16:48.727 Got JSON-RPC error response 00:16:48.727 GoRPCClient: error on JSON-RPC call 00:16:48.727 17:20:18 -- target/tls.sh@36 -- # killprocess 81900 00:16:48.727 17:20:18 -- common/autotest_common.sh@936 -- # '[' -z 81900 ']' 00:16:48.727 17:20:18 -- common/autotest_common.sh@940 -- # kill -0 81900 00:16:48.727 17:20:18 -- common/autotest_common.sh@941 -- # uname 00:16:48.727 17:20:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.727 17:20:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81900 00:16:48.727 17:20:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:48.727 17:20:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:48.727 killing process with pid 81900 00:16:48.727 17:20:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81900' 00:16:48.727 17:20:18 -- common/autotest_common.sh@955 -- # kill 81900 00:16:48.727 Received shutdown signal, test time was about 10.000000 seconds 00:16:48.727 00:16:48.727 Latency(us) 00:16:48.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.727 =================================================================================================================== 00:16:48.727 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.727 [2024-04-25 17:20:18.675925] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:48.727 17:20:18 -- common/autotest_common.sh@960 -- # wait 81900 00:16:48.986 17:20:18 -- target/tls.sh@37 -- # return 1 00:16:48.986 17:20:18 -- common/autotest_common.sh@641 -- # es=1 00:16:48.986 17:20:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:48.986 17:20:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:48.986 17:20:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:48.986 17:20:18 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.ZFn3ATX4XS 00:16:48.986 17:20:18 -- common/autotest_common.sh@638 -- # local es=0 00:16:48.986 17:20:18 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZFn3ATX4XS 00:16:48.986 17:20:18 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:48.986 17:20:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:48.986 17:20:18 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:48.986 17:20:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:48.986 17:20:18 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZFn3ATX4XS 00:16:48.986 17:20:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:48.987 17:20:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:48.987 17:20:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:48.987 17:20:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZFn3ATX4XS' 00:16:48.987 17:20:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.987 17:20:18 -- target/tls.sh@28 -- # bdevperf_pid=81940 00:16:48.987 17:20:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.987 17:20:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:48.987 17:20:18 -- target/tls.sh@31 -- # waitforlisten 81940 /var/tmp/bdevperf.sock 00:16:48.987 17:20:18 -- common/autotest_common.sh@817 -- # '[' -z 81940 ']' 00:16:48.987 17:20:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.987 17:20:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:48.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.987 17:20:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.987 17:20:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:48.987 17:20:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.987 [2024-04-25 17:20:18.897722] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:16:48.987 [2024-04-25 17:20:18.897831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81940 ] 00:16:49.245 [2024-04-25 17:20:19.034594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.245 [2024-04-25 17:20:19.093595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.183 17:20:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.183 17:20:19 -- common/autotest_common.sh@850 -- # return 0 00:16:50.183 17:20:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZFn3ATX4XS 00:16:50.183 [2024-04-25 17:20:20.102786] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:50.183 [2024-04-25 17:20:20.102889] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:50.183 [2024-04-25 17:20:20.107760] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:50.183 [2024-04-25 17:20:20.107847] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:50.183 [2024-04-25 17:20:20.107901] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:50.183 [2024-04-25 17:20:20.108509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cbaf0 (107): Transport endpoint is not connected 00:16:50.183 [2024-04-25 17:20:20.109494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cbaf0 (9): Bad file descriptor 00:16:50.183 [2024-04-25 17:20:20.110490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:50.183 [2024-04-25 17:20:20.110527] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:50.183 [2024-04-25 17:20:20.110548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
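Both "Could not find PSK for identity" errors in this part of the run (nqn.2016-06.io.spdk:host2 against cnode1 earlier, nqn.2016-06.io.spdk:host1 against cnode2 just above) fail at the target's PSK lookup rather than in the TLS stack itself: going by the logged strings, the identity offered during the handshake is composed as "NVMe0R01 <hostnqn> <subnqn>", and only pairings registered through nvmf_subsystem_add_host --psk have a matching entry. A throwaway helper to predict the identity a given pairing will produce (hypothetical, not part of tls.sh):

    psk_identity_sketch() {
        # $1 = hostnqn, $2 = subnqn, $3 = digest indicator (1 gives the 01 suffix seen here)
        printf 'NVMe0R%02d %s %s\n' "$3" "$1" "$2"
    }
    # psk_identity_sketch nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 1
    #   -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1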
00:16:50.183 2024/04/25 17:20:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.ZFn3ATX4XS subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:50.183 request: 00:16:50.183 { 00:16:50.183 "method": "bdev_nvme_attach_controller", 00:16:50.183 "params": { 00:16:50.183 "name": "TLSTEST", 00:16:50.183 "trtype": "tcp", 00:16:50.183 "traddr": "10.0.0.2", 00:16:50.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.183 "adrfam": "ipv4", 00:16:50.183 "trsvcid": "4420", 00:16:50.183 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:50.183 "psk": "/tmp/tmp.ZFn3ATX4XS" 00:16:50.183 } 00:16:50.183 } 00:16:50.183 Got JSON-RPC error response 00:16:50.183 GoRPCClient: error on JSON-RPC call 00:16:50.183 17:20:20 -- target/tls.sh@36 -- # killprocess 81940 00:16:50.183 17:20:20 -- common/autotest_common.sh@936 -- # '[' -z 81940 ']' 00:16:50.183 17:20:20 -- common/autotest_common.sh@940 -- # kill -0 81940 00:16:50.183 17:20:20 -- common/autotest_common.sh@941 -- # uname 00:16:50.183 17:20:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.183 17:20:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81940 00:16:50.183 17:20:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:50.183 17:20:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:50.183 killing process with pid 81940 00:16:50.183 17:20:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81940' 00:16:50.183 Received shutdown signal, test time was about 10.000000 seconds 00:16:50.183 00:16:50.183 Latency(us) 00:16:50.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.183 =================================================================================================================== 00:16:50.183 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.183 17:20:20 -- common/autotest_common.sh@955 -- # kill 81940 00:16:50.183 [2024-04-25 17:20:20.154440] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:50.183 17:20:20 -- common/autotest_common.sh@960 -- # wait 81940 00:16:50.442 17:20:20 -- target/tls.sh@37 -- # return 1 00:16:50.442 17:20:20 -- common/autotest_common.sh@641 -- # es=1 00:16:50.442 17:20:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:50.442 17:20:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:50.442 17:20:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:50.442 17:20:20 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:50.442 17:20:20 -- common/autotest_common.sh@638 -- # local es=0 00:16:50.442 17:20:20 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:50.442 17:20:20 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:50.442 17:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:50.442 17:20:20 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:50.442 17:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:50.442 17:20:20 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:16:50.442 17:20:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:50.442 17:20:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:50.442 17:20:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:50.442 17:20:20 -- target/tls.sh@23 -- # psk= 00:16:50.442 17:20:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.442 17:20:20 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.442 17:20:20 -- target/tls.sh@28 -- # bdevperf_pid=81990 00:16:50.442 17:20:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.442 17:20:20 -- target/tls.sh@31 -- # waitforlisten 81990 /var/tmp/bdevperf.sock 00:16:50.442 17:20:20 -- common/autotest_common.sh@817 -- # '[' -z 81990 ']' 00:16:50.442 17:20:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.442 17:20:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.442 17:20:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.442 17:20:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.442 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:16:50.442 [2024-04-25 17:20:20.371012] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:50.442 [2024-04-25 17:20:20.371113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81990 ] 00:16:50.701 [2024-04-25 17:20:20.499864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.701 [2024-04-25 17:20:20.553447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.268 17:20:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.268 17:20:21 -- common/autotest_common.sh@850 -- # return 0 00:16:51.268 17:20:21 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:51.527 [2024-04-25 17:20:21.475219] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:51.527 [2024-04-25 17:20:21.476502] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362630 (9): Bad file descriptor 00:16:51.527 [2024-04-25 17:20:21.477496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:51.527 [2024-04-25 17:20:21.477535] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:51.527 [2024-04-25 17:20:21.477545] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:51.527 2024/04/25 17:20:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:51.527 request: 00:16:51.527 { 00:16:51.527 "method": "bdev_nvme_attach_controller", 00:16:51.527 "params": { 00:16:51.527 "name": "TLSTEST", 00:16:51.527 "trtype": "tcp", 00:16:51.527 "traddr": "10.0.0.2", 00:16:51.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.527 "adrfam": "ipv4", 00:16:51.527 "trsvcid": "4420", 00:16:51.527 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:16:51.527 } 00:16:51.527 } 00:16:51.527 Got JSON-RPC error response 00:16:51.527 GoRPCClient: error on JSON-RPC call 00:16:51.527 17:20:21 -- target/tls.sh@36 -- # killprocess 81990 00:16:51.527 17:20:21 -- common/autotest_common.sh@936 -- # '[' -z 81990 ']' 00:16:51.527 17:20:21 -- common/autotest_common.sh@940 -- # kill -0 81990 00:16:51.527 17:20:21 -- common/autotest_common.sh@941 -- # uname 00:16:51.787 17:20:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.787 17:20:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81990 00:16:51.787 17:20:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:51.787 17:20:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:51.787 killing process with pid 81990 00:16:51.787 17:20:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81990' 00:16:51.787 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.787 00:16:51.787 Latency(us) 00:16:51.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.787 =================================================================================================================== 00:16:51.787 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:51.787 17:20:21 -- common/autotest_common.sh@955 -- # kill 81990 00:16:51.787 17:20:21 -- common/autotest_common.sh@960 -- # wait 81990 00:16:51.787 17:20:21 -- target/tls.sh@37 -- # return 1 00:16:51.787 17:20:21 -- common/autotest_common.sh@641 -- # es=1 00:16:51.787 17:20:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:51.787 17:20:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:51.787 17:20:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:51.787 17:20:21 -- target/tls.sh@158 -- # killprocess 81357 00:16:51.787 17:20:21 -- common/autotest_common.sh@936 -- # '[' -z 81357 ']' 00:16:51.787 17:20:21 -- common/autotest_common.sh@940 -- # kill -0 81357 00:16:51.787 17:20:21 -- common/autotest_common.sh@941 -- # uname 00:16:51.787 17:20:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.787 17:20:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81357 00:16:51.787 17:20:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:51.787 17:20:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:51.787 killing process with pid 81357 00:16:51.787 17:20:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81357' 00:16:51.787 17:20:21 -- common/autotest_common.sh@955 -- # kill 81357 00:16:51.787 [2024-04-25 17:20:21.717934] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:51.787 17:20:21 -- 
common/autotest_common.sh@960 -- # wait 81357 00:16:52.046 17:20:21 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:52.047 17:20:21 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:52.047 17:20:21 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:52.047 17:20:21 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:52.047 17:20:21 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:52.047 17:20:21 -- nvmf/common.sh@693 -- # digest=2 00:16:52.047 17:20:21 -- nvmf/common.sh@694 -- # python - 00:16:52.047 17:20:21 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:52.047 17:20:21 -- target/tls.sh@160 -- # mktemp 00:16:52.047 17:20:21 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.LiZCnHE9db 00:16:52.047 17:20:21 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:52.047 17:20:21 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.LiZCnHE9db 00:16:52.047 17:20:21 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:52.047 17:20:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:52.047 17:20:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:52.047 17:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:52.047 17:20:21 -- nvmf/common.sh@470 -- # nvmfpid=82041 00:16:52.047 17:20:21 -- nvmf/common.sh@471 -- # waitforlisten 82041 00:16:52.047 17:20:21 -- common/autotest_common.sh@817 -- # '[' -z 82041 ']' 00:16:52.047 17:20:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.047 17:20:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:52.047 17:20:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.047 17:20:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:52.047 17:20:21 -- common/autotest_common.sh@10 -- # set +x 00:16:52.047 17:20:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:52.047 [2024-04-25 17:20:22.002586] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:52.047 [2024-04-25 17:20:22.002684] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.306 [2024-04-25 17:20:22.137328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.306 [2024-04-25 17:20:22.185896] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.306 [2024-04-25 17:20:22.185945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.306 [2024-04-25 17:20:22.185954] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.306 [2024-04-25 17:20:22.185961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.307 [2024-04-25 17:20:22.185966] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
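The nvmfappstart call above reduces to a single command inside the test network namespace; an annotated copy follows (the flag glosses are a reader's interpretation matched against the app's own startup notices, not taken from documentation):

    # -i 0      : instance / shared-memory ID, consistent with --file-prefix=spdk0 in the EAL parameters
    # -e 0xFFFF : tracepoint group mask, hence "Tracepoint Group Mask 0xFFFF specified" above
    # -m 0x2    : core mask, hence the reactor starting on core 1 just below
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2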
00:16:52.307 [2024-04-25 17:20:22.185999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.244 17:20:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:53.244 17:20:22 -- common/autotest_common.sh@850 -- # return 0 00:16:53.244 17:20:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:53.244 17:20:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:53.244 17:20:22 -- common/autotest_common.sh@10 -- # set +x 00:16:53.244 17:20:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.244 17:20:22 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.LiZCnHE9db 00:16:53.244 17:20:22 -- target/tls.sh@49 -- # local key=/tmp/tmp.LiZCnHE9db 00:16:53.244 17:20:22 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:53.244 [2024-04-25 17:20:23.177147] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.244 17:20:23 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:53.503 17:20:23 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:53.760 [2024-04-25 17:20:23.569199] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.760 [2024-04-25 17:20:23.569386] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.760 17:20:23 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:54.018 malloc0 00:16:54.018 17:20:23 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.277 17:20:24 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LiZCnHE9db 00:16:54.277 [2024-04-25 17:20:24.219616] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:54.277 17:20:24 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LiZCnHE9db 00:16:54.277 17:20:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:54.277 17:20:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:54.277 17:20:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:54.277 17:20:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LiZCnHE9db' 00:16:54.277 17:20:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.277 17:20:24 -- target/tls.sh@28 -- # bdevperf_pid=82144 00:16:54.277 17:20:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.277 17:20:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.277 17:20:24 -- target/tls.sh@31 -- # waitforlisten 82144 /var/tmp/bdevperf.sock 00:16:54.277 17:20:24 -- common/autotest_common.sh@817 -- # '[' -z 82144 ']' 00:16:54.277 17:20:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.277 17:20:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:54.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
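Condensed from the trace, the whole target-side TLS setup that setup_nvmf_tgt just performed is a short rpc.py sequence. The sketch below only collects calls already shown in this log (the ssl socket option and framework_start_init were issued once, earlier in the run), with the paths and sizes used here:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    KEY=/tmp/tmp.LiZCnHE9db    # interchange PSK file, mode 0600

    $RPC sock_impl_set_options -i ssl --tls-version 13
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The -k on the listener is what made the earlier plaintext attach attempt fail, and the --psk on add_host is what the PSK-identity lookup discussed above resolves against.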
00:16:54.277 17:20:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.277 17:20:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:54.277 17:20:24 -- common/autotest_common.sh@10 -- # set +x 00:16:54.535 [2024-04-25 17:20:24.298186] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:16:54.535 [2024-04-25 17:20:24.298278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82144 ] 00:16:54.535 [2024-04-25 17:20:24.439032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.535 [2024-04-25 17:20:24.508007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.500 17:20:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:55.500 17:20:25 -- common/autotest_common.sh@850 -- # return 0 00:16:55.500 17:20:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LiZCnHE9db 00:16:55.767 [2024-04-25 17:20:25.502963] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.767 [2024-04-25 17:20:25.503129] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:55.767 TLSTESTn1 00:16:55.767 17:20:25 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:55.767 Running I/O for 10 seconds... 
00:17:05.744 00:17:05.744 Latency(us) 00:17:05.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.744 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:05.744 Verification LBA range: start 0x0 length 0x2000 00:17:05.744 TLSTESTn1 : 10.02 4342.26 16.96 0.00 0.00 29423.47 5898.24 19899.11 00:17:05.744 =================================================================================================================== 00:17:05.744 Total : 4342.26 16.96 0.00 0.00 29423.47 5898.24 19899.11 00:17:05.744 0 00:17:05.744 17:20:35 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.744 17:20:35 -- target/tls.sh@45 -- # killprocess 82144 00:17:05.744 17:20:35 -- common/autotest_common.sh@936 -- # '[' -z 82144 ']' 00:17:05.744 17:20:35 -- common/autotest_common.sh@940 -- # kill -0 82144 00:17:05.744 17:20:35 -- common/autotest_common.sh@941 -- # uname 00:17:06.003 17:20:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.003 17:20:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82144 00:17:06.003 17:20:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:06.003 17:20:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:06.003 killing process with pid 82144 00:17:06.003 17:20:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82144' 00:17:06.003 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.003 00:17:06.003 Latency(us) 00:17:06.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.003 =================================================================================================================== 00:17:06.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.003 17:20:35 -- common/autotest_common.sh@955 -- # kill 82144 00:17:06.004 [2024-04-25 17:20:35.743418] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:06.004 17:20:35 -- common/autotest_common.sh@960 -- # wait 82144 00:17:06.004 17:20:35 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.LiZCnHE9db 00:17:06.004 17:20:35 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LiZCnHE9db 00:17:06.004 17:20:35 -- common/autotest_common.sh@638 -- # local es=0 00:17:06.004 17:20:35 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LiZCnHE9db 00:17:06.004 17:20:35 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:06.004 17:20:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:06.004 17:20:35 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:06.004 17:20:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:06.004 17:20:35 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LiZCnHE9db 00:17:06.004 17:20:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:06.004 17:20:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:06.004 17:20:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:06.004 17:20:35 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LiZCnHE9db' 00:17:06.004 17:20:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:06.004 17:20:35 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:06.004 17:20:35 -- target/tls.sh@28 -- # bdevperf_pid=82291 00:17:06.004 17:20:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.004 17:20:35 -- target/tls.sh@31 -- # waitforlisten 82291 /var/tmp/bdevperf.sock 00:17:06.004 17:20:35 -- common/autotest_common.sh@817 -- # '[' -z 82291 ']' 00:17:06.004 17:20:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.004 17:20:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.004 17:20:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.004 17:20:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.004 17:20:35 -- common/autotest_common.sh@10 -- # set +x 00:17:06.004 [2024-04-25 17:20:35.961685] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:06.004 [2024-04-25 17:20:35.961775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82291 ] 00:17:06.262 [2024-04-25 17:20:36.089786] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.262 [2024-04-25 17:20:36.143124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.199 17:20:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:07.199 17:20:36 -- common/autotest_common.sh@850 -- # return 0 00:17:07.199 17:20:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LiZCnHE9db 00:17:07.199 [2024-04-25 17:20:37.126825] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.199 [2024-04-25 17:20:37.126889] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:07.199 [2024-04-25 17:20:37.126900] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.LiZCnHE9db 00:17:07.199 2024/04/25 17:20:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.LiZCnHE9db subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:17:07.199 request: 00:17:07.199 { 00:17:07.199 "method": "bdev_nvme_attach_controller", 00:17:07.199 "params": { 00:17:07.199 "name": "TLSTEST", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.199 "psk": "/tmp/tmp.LiZCnHE9db" 00:17:07.199 } 00:17:07.199 } 00:17:07.199 Got JSON-RPC error response 00:17:07.199 GoRPCClient: error on JSON-RPC call 00:17:07.199 17:20:37 -- target/tls.sh@36 -- # killprocess 82291 00:17:07.199 17:20:37 -- common/autotest_common.sh@936 -- # '[' -z 82291 ']' 00:17:07.199 17:20:37 -- 
common/autotest_common.sh@940 -- # kill -0 82291 00:17:07.199 17:20:37 -- common/autotest_common.sh@941 -- # uname 00:17:07.199 17:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.199 17:20:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82291 00:17:07.199 17:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:07.199 killing process with pid 82291 00:17:07.199 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.199 00:17:07.199 Latency(us) 00:17:07.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.199 =================================================================================================================== 00:17:07.199 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.199 17:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:07.199 17:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82291' 00:17:07.199 17:20:37 -- common/autotest_common.sh@955 -- # kill 82291 00:17:07.199 17:20:37 -- common/autotest_common.sh@960 -- # wait 82291 00:17:07.459 17:20:37 -- target/tls.sh@37 -- # return 1 00:17:07.459 17:20:37 -- common/autotest_common.sh@641 -- # es=1 00:17:07.459 17:20:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:07.459 17:20:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:07.459 17:20:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:07.459 17:20:37 -- target/tls.sh@174 -- # killprocess 82041 00:17:07.459 17:20:37 -- common/autotest_common.sh@936 -- # '[' -z 82041 ']' 00:17:07.459 17:20:37 -- common/autotest_common.sh@940 -- # kill -0 82041 00:17:07.459 17:20:37 -- common/autotest_common.sh@941 -- # uname 00:17:07.459 17:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.459 17:20:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82041 00:17:07.459 17:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:07.459 killing process with pid 82041 00:17:07.459 17:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:07.459 17:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82041' 00:17:07.459 17:20:37 -- common/autotest_common.sh@955 -- # kill 82041 00:17:07.459 [2024-04-25 17:20:37.358903] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:07.459 17:20:37 -- common/autotest_common.sh@960 -- # wait 82041 00:17:07.719 17:20:37 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:07.719 17:20:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:07.719 17:20:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:07.719 17:20:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.719 17:20:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:07.719 17:20:37 -- nvmf/common.sh@470 -- # nvmfpid=82342 00:17:07.719 17:20:37 -- nvmf/common.sh@471 -- # waitforlisten 82342 00:17:07.719 17:20:37 -- common/autotest_common.sh@817 -- # '[' -z 82342 ']' 00:17:07.719 17:20:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.719 17:20:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:07.719 17:20:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
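The attach failure above (pid 82291) uses the very key file that just passed a full 10-second verify run; the only difference is the chmod 0666 before the call, which bdev_nvme_load_psk rejects with "Incorrect permissions for PSK file". The nvmf_subsystem_add_host attempt further below trips the matching tcp_load_psk check on the target side before the file is put back to 0600. A small guard of the kind a wrapper script might carry (hypothetical, not part of tls.sh), assuming the loaders require an owner-only mode:

    KEY=/tmp/tmp.LiZCnHE9db    # path taken from this run
    [ "$(stat -c %a "$KEY")" = 600 ] || chmod 0600 "$KEY"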
00:17:07.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.719 17:20:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:07.719 17:20:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.719 [2024-04-25 17:20:37.582420] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:07.719 [2024-04-25 17:20:37.582517] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.979 [2024-04-25 17:20:37.706610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.979 [2024-04-25 17:20:37.757744] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.979 [2024-04-25 17:20:37.757784] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.979 [2024-04-25 17:20:37.757809] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.979 [2024-04-25 17:20:37.757816] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.979 [2024-04-25 17:20:37.757822] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.979 [2024-04-25 17:20:37.757852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.547 17:20:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:08.547 17:20:38 -- common/autotest_common.sh@850 -- # return 0 00:17:08.547 17:20:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:08.547 17:20:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:08.547 17:20:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.806 17:20:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.806 17:20:38 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.LiZCnHE9db 00:17:08.806 17:20:38 -- common/autotest_common.sh@638 -- # local es=0 00:17:08.806 17:20:38 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LiZCnHE9db 00:17:08.806 17:20:38 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:17:08.806 17:20:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:08.806 17:20:38 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:17:08.806 17:20:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:08.806 17:20:38 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.LiZCnHE9db 00:17:08.806 17:20:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.LiZCnHE9db 00:17:08.806 17:20:38 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:08.806 [2024-04-25 17:20:38.772158] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.065 17:20:38 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:09.065 17:20:38 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:09.325 [2024-04-25 17:20:39.208273] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:09.325 [2024-04-25 17:20:39.208521] tcp.c: 964:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.325 17:20:39 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:09.583 malloc0 00:17:09.583 17:20:39 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:09.842 17:20:39 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LiZCnHE9db 00:17:10.102 [2024-04-25 17:20:39.854959] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:10.102 [2024-04-25 17:20:39.855018] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:10.102 [2024-04-25 17:20:39.855059] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:10.102 2024/04/25 17:20:39 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.LiZCnHE9db], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:10.102 request: 00:17:10.102 { 00:17:10.102 "method": "nvmf_subsystem_add_host", 00:17:10.102 "params": { 00:17:10.102 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.102 "host": "nqn.2016-06.io.spdk:host1", 00:17:10.102 "psk": "/tmp/tmp.LiZCnHE9db" 00:17:10.102 } 00:17:10.102 } 00:17:10.102 Got JSON-RPC error response 00:17:10.102 GoRPCClient: error on JSON-RPC call 00:17:10.102 17:20:39 -- common/autotest_common.sh@641 -- # es=1 00:17:10.102 17:20:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:10.102 17:20:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:10.103 17:20:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:10.103 17:20:39 -- target/tls.sh@180 -- # killprocess 82342 00:17:10.103 17:20:39 -- common/autotest_common.sh@936 -- # '[' -z 82342 ']' 00:17:10.103 17:20:39 -- common/autotest_common.sh@940 -- # kill -0 82342 00:17:10.103 17:20:39 -- common/autotest_common.sh@941 -- # uname 00:17:10.103 17:20:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:10.103 17:20:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82342 00:17:10.103 17:20:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:10.103 killing process with pid 82342 00:17:10.103 17:20:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:10.103 17:20:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82342' 00:17:10.103 17:20:39 -- common/autotest_common.sh@955 -- # kill 82342 00:17:10.103 17:20:39 -- common/autotest_common.sh@960 -- # wait 82342 00:17:10.103 17:20:40 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.LiZCnHE9db 00:17:10.364 17:20:40 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:10.364 17:20:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:10.364 17:20:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.364 17:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:10.364 17:20:40 -- nvmf/common.sh@470 -- # nvmfpid=82447 00:17:10.364 17:20:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.364 17:20:40 -- nvmf/common.sh@471 -- # waitforlisten 82447 00:17:10.364 17:20:40 -- common/autotest_common.sh@817 -- # '[' -z 82447 ']' 00:17:10.364 17:20:40 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.364 17:20:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.364 17:20:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.364 17:20:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.364 17:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:10.364 [2024-04-25 17:20:40.149437] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:10.364 [2024-04-25 17:20:40.149546] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.364 [2024-04-25 17:20:40.288118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.364 [2024-04-25 17:20:40.341537] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.364 [2024-04-25 17:20:40.341620] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.364 [2024-04-25 17:20:40.341647] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.364 [2024-04-25 17:20:40.341654] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.365 [2024-04-25 17:20:40.341661] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.365 [2024-04-25 17:20:40.341694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.301 17:20:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.301 17:20:41 -- common/autotest_common.sh@850 -- # return 0 00:17:11.301 17:20:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:11.301 17:20:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:11.302 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:17:11.302 17:20:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.302 17:20:41 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.LiZCnHE9db 00:17:11.302 17:20:41 -- target/tls.sh@49 -- # local key=/tmp/tmp.LiZCnHE9db 00:17:11.302 17:20:41 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:11.561 [2024-04-25 17:20:41.318480] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.561 17:20:41 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:11.819 17:20:41 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:11.819 [2024-04-25 17:20:41.766535] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:11.819 [2024-04-25 17:20:41.766757] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.819 17:20:41 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:12.078 malloc0 00:17:12.079 17:20:41 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:12.337 17:20:42 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LiZCnHE9db 00:17:12.597 [2024-04-25 17:20:42.337155] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:12.597 17:20:42 -- target/tls.sh@188 -- # bdevperf_pid=82544 00:17:12.597 17:20:42 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:12.597 17:20:42 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:12.597 17:20:42 -- target/tls.sh@191 -- # waitforlisten 82544 /var/tmp/bdevperf.sock 00:17:12.597 17:20:42 -- common/autotest_common.sh@817 -- # '[' -z 82544 ']' 00:17:12.597 17:20:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:12.597 17:20:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:12.597 17:20:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:12.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:12.597 17:20:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:12.597 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:17:12.597 [2024-04-25 17:20:42.400637] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:12.597 [2024-04-25 17:20:42.400771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82544 ] 00:17:12.597 [2024-04-25 17:20:42.535938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.856 [2024-04-25 17:20:42.607099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.424 17:20:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:13.424 17:20:43 -- common/autotest_common.sh@850 -- # return 0 00:17:13.424 17:20:43 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LiZCnHE9db 00:17:13.683 [2024-04-25 17:20:43.481980] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.683 [2024-04-25 17:20:43.482076] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:13.683 TLSTESTn1 00:17:13.683 17:20:43 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:13.943 17:20:43 -- target/tls.sh@196 -- # tgtconf='{ 00:17:13.943 "subsystems": [ 00:17:13.943 { 00:17:13.943 "subsystem": "keyring", 00:17:13.943 "config": [] 00:17:13.943 }, 00:17:13.943 { 00:17:13.943 "subsystem": "iobuf", 00:17:13.943 "config": [ 00:17:13.943 { 00:17:13.943 "method": "iobuf_set_options", 00:17:13.943 "params": { 00:17:13.943 "large_bufsize": 135168, 00:17:13.943 "large_pool_count": 1024, 00:17:13.943 "small_bufsize": 8192, 00:17:13.943 "small_pool_count": 8192 00:17:13.943 } 00:17:13.943 } 00:17:13.943 ] 00:17:13.943 }, 
00:17:13.943 { 00:17:13.943 "subsystem": "sock", 00:17:13.943 "config": [ 00:17:13.943 { 00:17:13.943 "method": "sock_impl_set_options", 00:17:13.943 "params": { 00:17:13.943 "enable_ktls": false, 00:17:13.943 "enable_placement_id": 0, 00:17:13.943 "enable_quickack": false, 00:17:13.943 "enable_recv_pipe": true, 00:17:13.943 "enable_zerocopy_send_client": false, 00:17:13.943 "enable_zerocopy_send_server": true, 00:17:13.943 "impl_name": "posix", 00:17:13.943 "recv_buf_size": 2097152, 00:17:13.943 "send_buf_size": 2097152, 00:17:13.943 "tls_version": 0, 00:17:13.943 "zerocopy_threshold": 0 00:17:13.943 } 00:17:13.943 }, 00:17:13.943 { 00:17:13.943 "method": "sock_impl_set_options", 00:17:13.943 "params": { 00:17:13.943 "enable_ktls": false, 00:17:13.943 "enable_placement_id": 0, 00:17:13.943 "enable_quickack": false, 00:17:13.943 "enable_recv_pipe": true, 00:17:13.943 "enable_zerocopy_send_client": false, 00:17:13.943 "enable_zerocopy_send_server": true, 00:17:13.943 "impl_name": "ssl", 00:17:13.943 "recv_buf_size": 4096, 00:17:13.943 "send_buf_size": 4096, 00:17:13.943 "tls_version": 0, 00:17:13.943 "zerocopy_threshold": 0 00:17:13.943 } 00:17:13.943 } 00:17:13.943 ] 00:17:13.943 }, 00:17:13.943 { 00:17:13.943 "subsystem": "vmd", 00:17:13.943 "config": [] 00:17:13.943 }, 00:17:13.943 { 00:17:13.943 "subsystem": "accel", 00:17:13.943 "config": [ 00:17:13.943 { 00:17:13.944 "method": "accel_set_options", 00:17:13.944 "params": { 00:17:13.944 "buf_count": 2048, 00:17:13.944 "large_cache_size": 16, 00:17:13.944 "sequence_count": 2048, 00:17:13.944 "small_cache_size": 128, 00:17:13.944 "task_count": 2048 00:17:13.944 } 00:17:13.944 } 00:17:13.944 ] 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "subsystem": "bdev", 00:17:13.944 "config": [ 00:17:13.944 { 00:17:13.944 "method": "bdev_set_options", 00:17:13.944 "params": { 00:17:13.944 "bdev_auto_examine": true, 00:17:13.944 "bdev_io_cache_size": 256, 00:17:13.944 "bdev_io_pool_size": 65535, 00:17:13.944 "iobuf_large_cache_size": 16, 00:17:13.944 "iobuf_small_cache_size": 128 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "bdev_raid_set_options", 00:17:13.944 "params": { 00:17:13.944 "process_window_size_kb": 1024 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "bdev_iscsi_set_options", 00:17:13.944 "params": { 00:17:13.944 "timeout_sec": 30 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "bdev_nvme_set_options", 00:17:13.944 "params": { 00:17:13.944 "action_on_timeout": "none", 00:17:13.944 "allow_accel_sequence": false, 00:17:13.944 "arbitration_burst": 0, 00:17:13.944 "bdev_retry_count": 3, 00:17:13.944 "ctrlr_loss_timeout_sec": 0, 00:17:13.944 "delay_cmd_submit": true, 00:17:13.944 "dhchap_dhgroups": [ 00:17:13.944 "null", 00:17:13.944 "ffdhe2048", 00:17:13.944 "ffdhe3072", 00:17:13.944 "ffdhe4096", 00:17:13.944 "ffdhe6144", 00:17:13.944 "ffdhe8192" 00:17:13.944 ], 00:17:13.944 "dhchap_digests": [ 00:17:13.944 "sha256", 00:17:13.944 "sha384", 00:17:13.944 "sha512" 00:17:13.944 ], 00:17:13.944 "disable_auto_failback": false, 00:17:13.944 "fast_io_fail_timeout_sec": 0, 00:17:13.944 "generate_uuids": false, 00:17:13.944 "high_priority_weight": 0, 00:17:13.944 "io_path_stat": false, 00:17:13.944 "io_queue_requests": 0, 00:17:13.944 "keep_alive_timeout_ms": 10000, 00:17:13.944 "low_priority_weight": 0, 00:17:13.944 "medium_priority_weight": 0, 00:17:13.944 "nvme_adminq_poll_period_us": 10000, 00:17:13.944 "nvme_error_stat": false, 00:17:13.944 "nvme_ioq_poll_period_us": 0, 00:17:13.944 
"rdma_cm_event_timeout_ms": 0, 00:17:13.944 "rdma_max_cq_size": 0, 00:17:13.944 "rdma_srq_size": 0, 00:17:13.944 "reconnect_delay_sec": 0, 00:17:13.944 "timeout_admin_us": 0, 00:17:13.944 "timeout_us": 0, 00:17:13.944 "transport_ack_timeout": 0, 00:17:13.944 "transport_retry_count": 4, 00:17:13.944 "transport_tos": 0 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "bdev_nvme_set_hotplug", 00:17:13.944 "params": { 00:17:13.944 "enable": false, 00:17:13.944 "period_us": 100000 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "bdev_malloc_create", 00:17:13.944 "params": { 00:17:13.944 "block_size": 4096, 00:17:13.944 "name": "malloc0", 00:17:13.944 "num_blocks": 8192, 00:17:13.944 "optimal_io_boundary": 0, 00:17:13.944 "physical_block_size": 4096, 00:17:13.944 "uuid": "b456ddff-9dce-4de9-920d-a1b72b92f588" 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "bdev_wait_for_examine" 00:17:13.944 } 00:17:13.944 ] 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "subsystem": "nbd", 00:17:13.944 "config": [] 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "subsystem": "scheduler", 00:17:13.944 "config": [ 00:17:13.944 { 00:17:13.944 "method": "framework_set_scheduler", 00:17:13.944 "params": { 00:17:13.944 "name": "static" 00:17:13.944 } 00:17:13.944 } 00:17:13.944 ] 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "subsystem": "nvmf", 00:17:13.944 "config": [ 00:17:13.944 { 00:17:13.944 "method": "nvmf_set_config", 00:17:13.944 "params": { 00:17:13.944 "admin_cmd_passthru": { 00:17:13.944 "identify_ctrlr": false 00:17:13.944 }, 00:17:13.944 "discovery_filter": "match_any" 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "nvmf_set_max_subsystems", 00:17:13.944 "params": { 00:17:13.944 "max_subsystems": 1024 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "nvmf_set_crdt", 00:17:13.944 "params": { 00:17:13.944 "crdt1": 0, 00:17:13.944 "crdt2": 0, 00:17:13.944 "crdt3": 0 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "nvmf_create_transport", 00:17:13.944 "params": { 00:17:13.944 "abort_timeout_sec": 1, 00:17:13.944 "ack_timeout": 0, 00:17:13.944 "buf_cache_size": 4294967295, 00:17:13.944 "c2h_success": false, 00:17:13.944 "data_wr_pool_size": 0, 00:17:13.944 "dif_insert_or_strip": false, 00:17:13.944 "in_capsule_data_size": 4096, 00:17:13.944 "io_unit_size": 131072, 00:17:13.944 "max_aq_depth": 128, 00:17:13.944 "max_io_qpairs_per_ctrlr": 127, 00:17:13.944 "max_io_size": 131072, 00:17:13.944 "max_queue_depth": 128, 00:17:13.944 "num_shared_buffers": 511, 00:17:13.944 "sock_priority": 0, 00:17:13.944 "trtype": "TCP", 00:17:13.944 "zcopy": false 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "nvmf_create_subsystem", 00:17:13.944 "params": { 00:17:13.944 "allow_any_host": false, 00:17:13.944 "ana_reporting": false, 00:17:13.944 "max_cntlid": 65519, 00:17:13.944 "max_namespaces": 10, 00:17:13.944 "min_cntlid": 1, 00:17:13.944 "model_number": "SPDK bdev Controller", 00:17:13.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.944 "serial_number": "SPDK00000000000001" 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "nvmf_subsystem_add_host", 00:17:13.944 "params": { 00:17:13.944 "host": "nqn.2016-06.io.spdk:host1", 00:17:13.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.944 "psk": "/tmp/tmp.LiZCnHE9db" 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "nvmf_subsystem_add_ns", 00:17:13.944 "params": { 00:17:13.944 "namespace": { 
00:17:13.944 "bdev_name": "malloc0", 00:17:13.944 "nguid": "B456DDFF9DCE4DE9920DA1B72B92F588", 00:17:13.944 "no_auto_visible": false, 00:17:13.944 "nsid": 1, 00:17:13.944 "uuid": "b456ddff-9dce-4de9-920d-a1b72b92f588" 00:17:13.944 }, 00:17:13.944 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:13.944 } 00:17:13.944 }, 00:17:13.944 { 00:17:13.944 "method": "nvmf_subsystem_add_listener", 00:17:13.944 "params": { 00:17:13.944 "listen_address": { 00:17:13.944 "adrfam": "IPv4", 00:17:13.944 "traddr": "10.0.0.2", 00:17:13.944 "trsvcid": "4420", 00:17:13.944 "trtype": "TCP" 00:17:13.944 }, 00:17:13.944 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.944 "secure_channel": true 00:17:13.944 } 00:17:13.944 } 00:17:13.944 ] 00:17:13.944 } 00:17:13.944 ] 00:17:13.944 }' 00:17:13.944 17:20:43 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:14.513 17:20:44 -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:14.513 "subsystems": [ 00:17:14.513 { 00:17:14.513 "subsystem": "keyring", 00:17:14.513 "config": [] 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "subsystem": "iobuf", 00:17:14.513 "config": [ 00:17:14.513 { 00:17:14.513 "method": "iobuf_set_options", 00:17:14.513 "params": { 00:17:14.513 "large_bufsize": 135168, 00:17:14.513 "large_pool_count": 1024, 00:17:14.513 "small_bufsize": 8192, 00:17:14.513 "small_pool_count": 8192 00:17:14.513 } 00:17:14.513 } 00:17:14.513 ] 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "subsystem": "sock", 00:17:14.513 "config": [ 00:17:14.513 { 00:17:14.513 "method": "sock_impl_set_options", 00:17:14.513 "params": { 00:17:14.513 "enable_ktls": false, 00:17:14.513 "enable_placement_id": 0, 00:17:14.513 "enable_quickack": false, 00:17:14.513 "enable_recv_pipe": true, 00:17:14.513 "enable_zerocopy_send_client": false, 00:17:14.513 "enable_zerocopy_send_server": true, 00:17:14.513 "impl_name": "posix", 00:17:14.513 "recv_buf_size": 2097152, 00:17:14.513 "send_buf_size": 2097152, 00:17:14.513 "tls_version": 0, 00:17:14.513 "zerocopy_threshold": 0 00:17:14.513 } 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "method": "sock_impl_set_options", 00:17:14.513 "params": { 00:17:14.513 "enable_ktls": false, 00:17:14.513 "enable_placement_id": 0, 00:17:14.513 "enable_quickack": false, 00:17:14.513 "enable_recv_pipe": true, 00:17:14.513 "enable_zerocopy_send_client": false, 00:17:14.513 "enable_zerocopy_send_server": true, 00:17:14.513 "impl_name": "ssl", 00:17:14.513 "recv_buf_size": 4096, 00:17:14.513 "send_buf_size": 4096, 00:17:14.513 "tls_version": 0, 00:17:14.513 "zerocopy_threshold": 0 00:17:14.513 } 00:17:14.513 } 00:17:14.513 ] 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "subsystem": "vmd", 00:17:14.513 "config": [] 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "subsystem": "accel", 00:17:14.513 "config": [ 00:17:14.513 { 00:17:14.513 "method": "accel_set_options", 00:17:14.513 "params": { 00:17:14.513 "buf_count": 2048, 00:17:14.513 "large_cache_size": 16, 00:17:14.513 "sequence_count": 2048, 00:17:14.513 "small_cache_size": 128, 00:17:14.513 "task_count": 2048 00:17:14.513 } 00:17:14.513 } 00:17:14.513 ] 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "subsystem": "bdev", 00:17:14.513 "config": [ 00:17:14.513 { 00:17:14.513 "method": "bdev_set_options", 00:17:14.513 "params": { 00:17:14.513 "bdev_auto_examine": true, 00:17:14.513 "bdev_io_cache_size": 256, 00:17:14.513 "bdev_io_pool_size": 65535, 00:17:14.513 "iobuf_large_cache_size": 16, 00:17:14.513 "iobuf_small_cache_size": 128 00:17:14.513 } 00:17:14.513 }, 00:17:14.513 { 
00:17:14.513 "method": "bdev_raid_set_options", 00:17:14.513 "params": { 00:17:14.513 "process_window_size_kb": 1024 00:17:14.513 } 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "method": "bdev_iscsi_set_options", 00:17:14.513 "params": { 00:17:14.513 "timeout_sec": 30 00:17:14.513 } 00:17:14.513 }, 00:17:14.513 { 00:17:14.513 "method": "bdev_nvme_set_options", 00:17:14.513 "params": { 00:17:14.514 "action_on_timeout": "none", 00:17:14.514 "allow_accel_sequence": false, 00:17:14.514 "arbitration_burst": 0, 00:17:14.514 "bdev_retry_count": 3, 00:17:14.514 "ctrlr_loss_timeout_sec": 0, 00:17:14.514 "delay_cmd_submit": true, 00:17:14.514 "dhchap_dhgroups": [ 00:17:14.514 "null", 00:17:14.514 "ffdhe2048", 00:17:14.514 "ffdhe3072", 00:17:14.514 "ffdhe4096", 00:17:14.514 "ffdhe6144", 00:17:14.514 "ffdhe8192" 00:17:14.514 ], 00:17:14.514 "dhchap_digests": [ 00:17:14.514 "sha256", 00:17:14.514 "sha384", 00:17:14.514 "sha512" 00:17:14.514 ], 00:17:14.514 "disable_auto_failback": false, 00:17:14.514 "fast_io_fail_timeout_sec": 0, 00:17:14.514 "generate_uuids": false, 00:17:14.514 "high_priority_weight": 0, 00:17:14.514 "io_path_stat": false, 00:17:14.514 "io_queue_requests": 512, 00:17:14.514 "keep_alive_timeout_ms": 10000, 00:17:14.514 "low_priority_weight": 0, 00:17:14.514 "medium_priority_weight": 0, 00:17:14.514 "nvme_adminq_poll_period_us": 10000, 00:17:14.514 "nvme_error_stat": false, 00:17:14.514 "nvme_ioq_poll_period_us": 0, 00:17:14.514 "rdma_cm_event_timeout_ms": 0, 00:17:14.514 "rdma_max_cq_size": 0, 00:17:14.514 "rdma_srq_size": 0, 00:17:14.514 "reconnect_delay_sec": 0, 00:17:14.514 "timeout_admin_us": 0, 00:17:14.514 "timeout_us": 0, 00:17:14.514 "transport_ack_timeout": 0, 00:17:14.514 "transport_retry_count": 4, 00:17:14.514 "transport_tos": 0 00:17:14.514 } 00:17:14.514 }, 00:17:14.514 { 00:17:14.514 "method": "bdev_nvme_attach_controller", 00:17:14.514 "params": { 00:17:14.514 "adrfam": "IPv4", 00:17:14.514 "ctrlr_loss_timeout_sec": 0, 00:17:14.514 "ddgst": false, 00:17:14.514 "fast_io_fail_timeout_sec": 0, 00:17:14.514 "hdgst": false, 00:17:14.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.514 "name": "TLSTEST", 00:17:14.514 "prchk_guard": false, 00:17:14.514 "prchk_reftag": false, 00:17:14.514 "psk": "/tmp/tmp.LiZCnHE9db", 00:17:14.514 "reconnect_delay_sec": 0, 00:17:14.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.514 "traddr": "10.0.0.2", 00:17:14.514 "trsvcid": "4420", 00:17:14.514 "trtype": "TCP" 00:17:14.514 } 00:17:14.514 }, 00:17:14.514 { 00:17:14.514 "method": "bdev_nvme_set_hotplug", 00:17:14.514 "params": { 00:17:14.514 "enable": false, 00:17:14.514 "period_us": 100000 00:17:14.514 } 00:17:14.514 }, 00:17:14.514 { 00:17:14.514 "method": "bdev_wait_for_examine" 00:17:14.514 } 00:17:14.514 ] 00:17:14.514 }, 00:17:14.514 { 00:17:14.514 "subsystem": "nbd", 00:17:14.514 "config": [] 00:17:14.514 } 00:17:14.514 ] 00:17:14.514 }' 00:17:14.514 17:20:44 -- target/tls.sh@199 -- # killprocess 82544 00:17:14.514 17:20:44 -- common/autotest_common.sh@936 -- # '[' -z 82544 ']' 00:17:14.514 17:20:44 -- common/autotest_common.sh@940 -- # kill -0 82544 00:17:14.514 17:20:44 -- common/autotest_common.sh@941 -- # uname 00:17:14.514 17:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.514 17:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82544 00:17:14.514 17:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:14.514 17:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:14.514 killing 
process with pid 82544 00:17:14.514 17:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82544' 00:17:14.514 Received shutdown signal, test time was about 10.000000 seconds 00:17:14.514 00:17:14.514 Latency(us) 00:17:14.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.514 =================================================================================================================== 00:17:14.514 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.514 17:20:44 -- common/autotest_common.sh@955 -- # kill 82544 00:17:14.514 [2024-04-25 17:20:44.241219] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:14.514 17:20:44 -- common/autotest_common.sh@960 -- # wait 82544 00:17:14.514 17:20:44 -- target/tls.sh@200 -- # killprocess 82447 00:17:14.514 17:20:44 -- common/autotest_common.sh@936 -- # '[' -z 82447 ']' 00:17:14.514 17:20:44 -- common/autotest_common.sh@940 -- # kill -0 82447 00:17:14.514 17:20:44 -- common/autotest_common.sh@941 -- # uname 00:17:14.514 17:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.514 17:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82447 00:17:14.514 17:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:14.514 17:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:14.514 killing process with pid 82447 00:17:14.514 17:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82447' 00:17:14.514 17:20:44 -- common/autotest_common.sh@955 -- # kill 82447 00:17:14.514 [2024-04-25 17:20:44.435280] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:14.514 17:20:44 -- common/autotest_common.sh@960 -- # wait 82447 00:17:14.773 17:20:44 -- target/tls.sh@203 -- # echo '{ 00:17:14.773 "subsystems": [ 00:17:14.773 { 00:17:14.773 "subsystem": "keyring", 00:17:14.773 "config": [] 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "subsystem": "iobuf", 00:17:14.773 "config": [ 00:17:14.773 { 00:17:14.773 "method": "iobuf_set_options", 00:17:14.773 "params": { 00:17:14.773 "large_bufsize": 135168, 00:17:14.773 "large_pool_count": 1024, 00:17:14.773 "small_bufsize": 8192, 00:17:14.773 "small_pool_count": 8192 00:17:14.773 } 00:17:14.773 } 00:17:14.773 ] 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "subsystem": "sock", 00:17:14.773 "config": [ 00:17:14.773 { 00:17:14.773 "method": "sock_impl_set_options", 00:17:14.773 "params": { 00:17:14.773 "enable_ktls": false, 00:17:14.773 "enable_placement_id": 0, 00:17:14.773 "enable_quickack": false, 00:17:14.773 "enable_recv_pipe": true, 00:17:14.773 "enable_zerocopy_send_client": false, 00:17:14.773 "enable_zerocopy_send_server": true, 00:17:14.773 "impl_name": "posix", 00:17:14.773 "recv_buf_size": 2097152, 00:17:14.773 "send_buf_size": 2097152, 00:17:14.773 "tls_version": 0, 00:17:14.773 "zerocopy_threshold": 0 00:17:14.773 } 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "method": "sock_impl_set_options", 00:17:14.773 "params": { 00:17:14.773 "enable_ktls": false, 00:17:14.773 "enable_placement_id": 0, 00:17:14.773 "enable_quickack": false, 00:17:14.773 "enable_recv_pipe": true, 00:17:14.773 "enable_zerocopy_send_client": false, 00:17:14.773 "enable_zerocopy_send_server": true, 00:17:14.773 "impl_name": "ssl", 00:17:14.773 "recv_buf_size": 4096, 00:17:14.773 "send_buf_size": 4096, 
00:17:14.773 "tls_version": 0, 00:17:14.773 "zerocopy_threshold": 0 00:17:14.773 } 00:17:14.773 } 00:17:14.773 ] 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "subsystem": "vmd", 00:17:14.773 "config": [] 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "subsystem": "accel", 00:17:14.773 "config": [ 00:17:14.773 { 00:17:14.773 "method": "accel_set_options", 00:17:14.773 "params": { 00:17:14.773 "buf_count": 2048, 00:17:14.773 "large_cache_size": 16, 00:17:14.773 "sequence_count": 2048, 00:17:14.773 "small_cache_size": 128, 00:17:14.773 "task_count": 2048 00:17:14.773 } 00:17:14.773 } 00:17:14.773 ] 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "subsystem": "bdev", 00:17:14.773 "config": [ 00:17:14.773 { 00:17:14.773 "method": "bdev_set_options", 00:17:14.773 "params": { 00:17:14.773 "bdev_auto_examine": true, 00:17:14.773 "bdev_io_cache_size": 256, 00:17:14.773 "bdev_io_pool_size": 65535, 00:17:14.773 "iobuf_large_cache_size": 16, 00:17:14.773 "iobuf_small_cache_size": 128 00:17:14.773 } 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "method": "bdev_raid_set_options", 00:17:14.773 "params": { 00:17:14.773 "process_window_size_kb": 1024 00:17:14.773 } 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "method": "bdev_iscsi_set_options", 00:17:14.773 "params": { 00:17:14.773 "timeout_sec": 30 00:17:14.773 } 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "method": "bdev_nvme_set_options", 00:17:14.773 "params": { 00:17:14.773 "action_on_timeout": "none", 00:17:14.773 "allow_accel_sequence": false, 00:17:14.773 "arbitration_burst": 0, 00:17:14.773 "bdev_retry_count": 3, 00:17:14.773 "ctrlr_loss_timeout_sec": 0, 00:17:14.773 "delay_cmd_submit": true, 00:17:14.773 "dhchap_dhgroups": [ 00:17:14.773 "null", 00:17:14.773 "ffdhe2048", 00:17:14.773 "ffdhe3072", 00:17:14.773 "ffdhe4096", 00:17:14.773 "ffdhe6144", 00:17:14.773 "ffdhe8192" 00:17:14.773 ], 00:17:14.773 "dhchap_digests": [ 00:17:14.773 "sha256", 00:17:14.773 "sha384", 00:17:14.773 "sha512" 00:17:14.773 ], 00:17:14.773 "disable_auto_failback": false, 00:17:14.773 "fast_io_fail_timeout_sec": 0, 00:17:14.773 "generate_uuids": false, 00:17:14.773 "high_priority_weight": 0, 00:17:14.773 "io_path_stat": false, 00:17:14.773 "io_queue_requests": 0, 00:17:14.773 "keep_alive_timeout_ms": 10000, 00:17:14.773 "low_priority_weight": 0, 00:17:14.773 "medium_priority_weight": 0, 00:17:14.773 "nvme_adminq_poll_period_us": 10000, 00:17:14.773 "nvme_error_stat": false, 00:17:14.773 "nvme_ioq_poll_period_us": 0, 00:17:14.773 "rdma_cm_event_timeout_ms": 0, 00:17:14.773 "rdma_max_cq_size": 0, 00:17:14.773 "rdma_srq_size": 0, 00:17:14.773 "reconnect_delay_sec": 0, 00:17:14.773 "timeout_admin_us": 0, 00:17:14.773 "timeout_us": 0, 00:17:14.773 "transport_ack_timeout": 0, 00:17:14.773 "transport_retry_count": 4, 00:17:14.773 "transport_tos": 0 00:17:14.773 } 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "method": "bdev_nvme_set_hotplug", 00:17:14.773 "params": { 00:17:14.773 17:20:44 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:14.773 "enable": false, 00:17:14.773 "period_us": 100000 00:17:14.773 } 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "method": "bdev_malloc_create", 00:17:14.773 "params": { 00:17:14.773 "block_size": 4096, 00:17:14.773 "name": "malloc0", 00:17:14.773 "num_blocks": 8192, 00:17:14.773 "optimal_io_boundary": 0, 00:17:14.773 "physical_block_size": 4096, 00:17:14.773 "uuid": "b456ddff-9dce-4de9-920d-a1b72b92f588" 00:17:14.773 } 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "method": "bdev_wait_for_examine" 00:17:14.773 } 00:17:14.773 ] 00:17:14.773 
}, 00:17:14.773 { 00:17:14.773 "subsystem": "nbd", 00:17:14.773 "config": [] 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "subsystem": "scheduler", 00:17:14.773 "config": [ 00:17:14.773 { 00:17:14.773 "method": "framework_set_scheduler", 00:17:14.773 "params": { 00:17:14.773 "name": "static" 00:17:14.773 } 00:17:14.773 } 00:17:14.773 ] 00:17:14.773 }, 00:17:14.773 { 00:17:14.773 "subsystem": "nvmf", 00:17:14.773 "config": [ 00:17:14.773 { 00:17:14.773 "method": "nvmf_set_config", 00:17:14.773 "params": { 00:17:14.773 "admin_cmd_passthru": { 00:17:14.774 "identify_ctrlr": false 00:17:14.774 }, 00:17:14.774 "discovery_filter": "match_any" 00:17:14.774 } 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "method": "nvmf_set_max_subsystems", 00:17:14.774 "params": { 00:17:14.774 "max_subsystems": 1024 00:17:14.774 } 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "method": "nvmf_set_crdt", 00:17:14.774 "params": { 00:17:14.774 "crdt1": 0, 00:17:14.774 "crdt2": 0, 00:17:14.774 "crdt3": 0 00:17:14.774 } 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "method": "nvmf_create_transport", 00:17:14.774 "params": { 00:17:14.774 "abort_timeout_sec": 1, 00:17:14.774 "ack_timeout": 0, 00:17:14.774 "buf_cache_size": 4294967295, 00:17:14.774 "c2h_success": false, 00:17:14.774 "data_wr_pool_size": 0, 00:17:14.774 "dif_insert_or_strip": false, 00:17:14.774 "in_capsule_data_size": 4096, 00:17:14.774 "io_unit_size": 131072, 00:17:14.774 "max_aq_depth": 128, 00:17:14.774 "max_io_qpairs_per_ctrlr": 127, 00:17:14.774 "max_io_size": 131072, 00:17:14.774 "max_queue_depth": 128, 00:17:14.774 "num_shared_buffers": 511, 00:17:14.774 "sock_priority": 0, 00:17:14.774 "trtype": "TCP", 00:17:14.774 "zcopy": false 00:17:14.774 } 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "method": "nvmf_create_subsystem", 00:17:14.774 "params": { 00:17:14.774 "allow_any_host": false, 00:17:14.774 "ana_reporting": false, 00:17:14.774 "max_cntlid": 65519, 00:17:14.774 "max_namespaces": 10, 00:17:14.774 "min_cntlid": 1, 00:17:14.774 "model_number": "SPDK bdev Controller", 00:17:14.774 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.774 "serial_number": "SPDK00000000000001" 00:17:14.774 } 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "method": "nvmf_subsystem_add_host", 00:17:14.774 "params": { 00:17:14.774 "host": "nqn.2016-06.io.spdk:host1", 00:17:14.774 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.774 "psk": "/tmp/tmp.LiZCnHE9db" 00:17:14.774 } 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "method": "nvmf_subsystem_add_ns", 00:17:14.774 "params": { 00:17:14.774 "namespace": { 00:17:14.774 "bdev_name": "malloc0", 00:17:14.774 "nguid": "B456DDFF9DCE4DE9920DA1B72B92F588", 00:17:14.774 "no_auto_visible": false, 00:17:14.774 "nsid": 1, 00:17:14.774 "uuid": "b456ddff-9dce-4de9-920d-a1b72b92f588" 00:17:14.774 }, 00:17:14.774 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:14.774 } 00:17:14.774 }, 00:17:14.774 { 00:17:14.774 "method": "nvmf_subsystem_add_listener", 00:17:14.774 "params": { 00:17:14.774 "listen_address": { 00:17:14.774 "adrfam": "IPv4", 00:17:14.774 "traddr": "10.0.0.2", 00:17:14.774 "trsvcid": "4420", 00:17:14.774 "trtype": "TCP" 00:17:14.774 }, 00:17:14.774 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.774 "secure_channel": true 00:17:14.774 } 00:17:14.774 } 00:17:14.774 ] 00:17:14.774 } 00:17:14.774 ] 00:17:14.774 }' 00:17:14.774 17:20:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:14.774 17:20:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:14.774 17:20:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.774 17:20:44 -- 
nvmf/common.sh@470 -- # nvmfpid=82623 00:17:14.774 17:20:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:14.774 17:20:44 -- nvmf/common.sh@471 -- # waitforlisten 82623 00:17:14.774 17:20:44 -- common/autotest_common.sh@817 -- # '[' -z 82623 ']' 00:17:14.774 17:20:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.774 17:20:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:14.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.774 17:20:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.774 17:20:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:14.774 17:20:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.774 [2024-04-25 17:20:44.663397] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:14.774 [2024-04-25 17:20:44.663489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.033 [2024-04-25 17:20:44.796461] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.033 [2024-04-25 17:20:44.846369] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.033 [2024-04-25 17:20:44.846421] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.033 [2024-04-25 17:20:44.846447] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.033 [2024-04-25 17:20:44.846453] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.033 [2024-04-25 17:20:44.846459] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
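For reference, this target instance (pid 82623) is not configured over RPC at all: the JSON subsystem/listener/PSK configuration echoed just above is handed to nvmf_tgt on a file descriptor via '-c /dev/fd/62'. A stand-alone sketch of that launch, assuming the same JSON has been saved to a file first (the file name below is illustrative, not from this run):

  # assumed: the target JSON printed above, saved to disk
  TGTCONF=/tmp/tgtconf.json
  # feed the config in on fd 62 so '-c /dev/fd/62' matches the invocation in this log
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 62<"$TGTCONF"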
00:17:15.033 [2024-04-25 17:20:44.846543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.292 [2024-04-25 17:20:45.018580] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.292 [2024-04-25 17:20:45.034533] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:15.292 [2024-04-25 17:20:45.050528] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:15.292 [2024-04-25 17:20:45.050768] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.859 17:20:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:15.859 17:20:45 -- common/autotest_common.sh@850 -- # return 0 00:17:15.859 17:20:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:15.859 17:20:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:15.859 17:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 17:20:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.859 17:20:45 -- target/tls.sh@207 -- # bdevperf_pid=82667 00:17:15.859 17:20:45 -- target/tls.sh@208 -- # waitforlisten 82667 /var/tmp/bdevperf.sock 00:17:15.859 17:20:45 -- common/autotest_common.sh@817 -- # '[' -z 82667 ']' 00:17:15.859 17:20:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.859 17:20:45 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:15.859 17:20:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.859 17:20:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
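For comparison with the config-file approach, the same listener, namespace, and PSK host state was built call-by-call by setup_nvmf_tgt earlier in this run (and is built again later). Consolidated from that trace, with the key file first locked down to 0600 because the first nvmf_subsystem_add_host attempt above failed with 'Incorrect permissions for PSK file':

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.LiZCnHE9db
  chmod 0600 "$KEY"     # the PSK file must not be group/world readable
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as requiring TLS ("secure_channel": true in the saved config)
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"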
00:17:15.859 17:20:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.859 17:20:45 -- target/tls.sh@204 -- # echo '{ 00:17:15.859 "subsystems": [ 00:17:15.859 { 00:17:15.859 "subsystem": "keyring", 00:17:15.859 "config": [] 00:17:15.859 }, 00:17:15.859 { 00:17:15.859 "subsystem": "iobuf", 00:17:15.859 "config": [ 00:17:15.859 { 00:17:15.859 "method": "iobuf_set_options", 00:17:15.859 "params": { 00:17:15.859 "large_bufsize": 135168, 00:17:15.859 "large_pool_count": 1024, 00:17:15.859 "small_bufsize": 8192, 00:17:15.859 "small_pool_count": 8192 00:17:15.859 } 00:17:15.859 } 00:17:15.859 ] 00:17:15.859 }, 00:17:15.859 { 00:17:15.859 "subsystem": "sock", 00:17:15.859 "config": [ 00:17:15.859 { 00:17:15.859 "method": "sock_impl_set_options", 00:17:15.859 "params": { 00:17:15.859 "enable_ktls": false, 00:17:15.859 "enable_placement_id": 0, 00:17:15.859 "enable_quickack": false, 00:17:15.859 "enable_recv_pipe": true, 00:17:15.860 "enable_zerocopy_send_client": false, 00:17:15.860 "enable_zerocopy_send_server": true, 00:17:15.860 "impl_name": "posix", 00:17:15.860 "recv_buf_size": 2097152, 00:17:15.860 "send_buf_size": 2097152, 00:17:15.860 "tls_version": 0, 00:17:15.860 "zerocopy_threshold": 0 00:17:15.860 } 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "method": "sock_impl_set_options", 00:17:15.860 "params": { 00:17:15.860 "enable_ktls": false, 00:17:15.860 "enable_placement_id": 0, 00:17:15.860 "enable_quickack": false, 00:17:15.860 "enable_recv_pipe": true, 00:17:15.860 "enable_zerocopy_send_client": false, 00:17:15.860 "enable_zerocopy_send_server": true, 00:17:15.860 "impl_name": "ssl", 00:17:15.860 "recv_buf_size": 4096, 00:17:15.860 "send_buf_size": 4096, 00:17:15.860 "tls_version": 0, 00:17:15.860 "zerocopy_threshold": 0 00:17:15.860 } 00:17:15.860 } 00:17:15.860 ] 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "subsystem": "vmd", 00:17:15.860 "config": [] 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "subsystem": "accel", 00:17:15.860 "config": [ 00:17:15.860 { 00:17:15.860 "method": "accel_set_options", 00:17:15.860 "params": { 00:17:15.860 "buf_count": 2048, 00:17:15.860 "large_cache_size": 16, 00:17:15.860 "sequence_count": 2048, 00:17:15.860 "small_cache_size": 128, 00:17:15.860 "task_count": 2048 00:17:15.860 } 00:17:15.860 } 00:17:15.860 ] 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "subsystem": "bdev", 00:17:15.860 "config": [ 00:17:15.860 { 00:17:15.860 "method": "bdev_set_options", 00:17:15.860 "params": { 00:17:15.860 "bdev_auto_examine": true, 00:17:15.860 "bdev_io_cache_size": 256, 00:17:15.860 "bdev_io_pool_size": 65535, 00:17:15.860 "iobuf_large_cache_size": 16, 00:17:15.860 "iobuf_small_cache_size": 128 00:17:15.860 } 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "method": "bdev_raid_set_options", 00:17:15.860 "params": { 00:17:15.860 "process_window_size_kb": 1024 00:17:15.860 } 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "method": "bdev_iscsi_set_options", 00:17:15.860 "params": { 00:17:15.860 "timeout_sec": 30 00:17:15.860 } 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "method": "bdev_nvme_set_options", 00:17:15.860 "params": { 00:17:15.860 "action_on_timeout": "none", 00:17:15.860 "allow_accel_sequence": false, 00:17:15.860 "arbitration_burst": 0, 00:17:15.860 "bdev_retry_count": 3, 00:17:15.860 "ctrlr_loss_timeout_sec": 0, 00:17:15.860 "delay_cmd_submit": true, 00:17:15.860 "dhchap_dhgroups": [ 00:17:15.860 "null", 00:17:15.860 "ffdhe2048", 00:17:15.860 "ffdhe3072", 00:17:15.860 "ffdhe4096", 00:17:15.860 "ffdhe6144", 00:17:15.860 "ffdhe8192" 00:17:15.860 ], 
00:17:15.860 "dhchap_digests": [ 00:17:15.860 "sha256", 00:17:15.860 "sha384", 00:17:15.860 "sha512" 00:17:15.860 ], 00:17:15.860 "disable_auto_failback": false, 00:17:15.860 "fast_io_fail_timeout_sec": 0, 00:17:15.860 "generate_uuids": false, 00:17:15.860 "high_priority_weight": 0, 00:17:15.860 "io_path_stat": false, 00:17:15.860 "io_queue_requests": 512, 00:17:15.860 "keep_alive_timeout_ms": 10000, 00:17:15.860 "low_priority_weight": 0, 00:17:15.860 "medium_priority_weight": 0, 00:17:15.860 "nvme_adminq_poll_period_us": 10000, 00:17:15.860 "nvme_error_stat": false, 00:17:15.860 "nvme_ioq_poll_period_us": 0, 00:17:15.860 "rdma_cm_event_timeout_ms": 0, 00:17:15.860 "rdma_max_cq_size": 0, 00:17:15.860 "rdma_srq_size": 0, 00:17:15.860 "reconnect_delay_sec": 0, 00:17:15.860 "timeout_admin_us": 0, 00:17:15.860 "timeout_us": 0, 00:17:15.860 "transport_ack_timeout": 0, 00:17:15.860 "transport_retry_count": 4, 00:17:15.860 "transport_tos": 0 00:17:15.860 } 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "method": "bdev_nvme_attach_controller", 00:17:15.860 "params": { 00:17:15.860 "adrfam": "IPv4", 00:17:15.860 "ctrlr_loss_timeout_sec": 0, 00:17:15.860 "ddgst": false, 00:17:15.860 "fast_io_fail_timeout_sec": 0, 00:17:15.860 "hdgst": false, 00:17:15.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.860 "name": "TLSTEST", 00:17:15.860 "prchk_guard": false, 00:17:15.860 "prchk_reftag": false, 00:17:15.860 "psk": "/tmp/tmp.LiZCnHE9db", 00:17:15.860 "reconnect_delay_sec": 0, 00:17:15.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.860 "traddr": "10.0.0.2", 00:17:15.860 "trsvcid": "4420", 00:17:15.860 "trtype": "TCP" 00:17:15.860 } 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "method": "bdev_nvme_set_hotplug", 00:17:15.860 "params": { 00:17:15.860 "enable": false, 00:17:15.860 "period_us": 100000 00:17:15.860 } 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "method": "bdev_wait_for_examine" 00:17:15.860 } 00:17:15.860 ] 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "subsystem": "nbd", 00:17:15.860 "config": [] 00:17:15.860 } 00:17:15.860 ] 00:17:15.860 }' 00:17:15.860 17:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.860 [2024-04-25 17:20:45.653943] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:15.860 [2024-04-25 17:20:45.654040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82667 ] 00:17:15.860 [2024-04-25 17:20:45.790067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.120 [2024-04-25 17:20:45.858198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.120 [2024-04-25 17:20:45.982000] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.120 [2024-04-25 17:20:45.982140] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:16.687 17:20:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.687 17:20:46 -- common/autotest_common.sh@850 -- # return 0 00:17:16.687 17:20:46 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:16.946 Running I/O for 10 seconds... 
00:17:26.927 00:17:26.927 Latency(us) 00:17:26.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.927 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:26.927 Verification LBA range: start 0x0 length 0x2000 00:17:26.927 TLSTESTn1 : 10.02 4407.17 17.22 0.00 0.00 28989.92 8460.10 19660.80 00:17:26.927 =================================================================================================================== 00:17:26.927 Total : 4407.17 17.22 0.00 0.00 28989.92 8460.10 19660.80 00:17:26.927 0 00:17:26.927 17:20:56 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.927 17:20:56 -- target/tls.sh@214 -- # killprocess 82667 00:17:26.927 17:20:56 -- common/autotest_common.sh@936 -- # '[' -z 82667 ']' 00:17:26.927 17:20:56 -- common/autotest_common.sh@940 -- # kill -0 82667 00:17:26.927 17:20:56 -- common/autotest_common.sh@941 -- # uname 00:17:26.927 17:20:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.927 17:20:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82667 00:17:26.927 17:20:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:26.927 killing process with pid 82667 00:17:26.927 17:20:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:26.927 17:20:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82667' 00:17:26.927 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.927 00:17:26.927 Latency(us) 00:17:26.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.928 =================================================================================================================== 00:17:26.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.928 17:20:56 -- common/autotest_common.sh@955 -- # kill 82667 00:17:26.928 [2024-04-25 17:20:56.763536] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:26.928 17:20:56 -- common/autotest_common.sh@960 -- # wait 82667 00:17:27.187 17:20:56 -- target/tls.sh@215 -- # killprocess 82623 00:17:27.187 17:20:56 -- common/autotest_common.sh@936 -- # '[' -z 82623 ']' 00:17:27.187 17:20:56 -- common/autotest_common.sh@940 -- # kill -0 82623 00:17:27.187 17:20:56 -- common/autotest_common.sh@941 -- # uname 00:17:27.187 17:20:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.187 17:20:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82623 00:17:27.187 17:20:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:27.187 17:20:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:27.187 killing process with pid 82623 00:17:27.187 17:20:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82623' 00:17:27.187 17:20:56 -- common/autotest_common.sh@955 -- # kill 82623 00:17:27.187 [2024-04-25 17:20:56.969556] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:27.187 17:20:56 -- common/autotest_common.sh@960 -- # wait 82623 00:17:27.187 17:20:57 -- target/tls.sh@218 -- # nvmfappstart 00:17:27.187 17:20:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:27.187 17:20:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:27.187 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:17:27.187 17:20:57 -- nvmf/common.sh@470 -- # 
nvmfpid=82812 00:17:27.187 17:20:57 -- nvmf/common.sh@471 -- # waitforlisten 82812 00:17:27.187 17:20:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:27.187 17:20:57 -- common/autotest_common.sh@817 -- # '[' -z 82812 ']' 00:17:27.187 17:20:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.187 17:20:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.187 17:20:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.187 17:20:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.187 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:17:27.446 [2024-04-25 17:20:57.205883] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:27.446 [2024-04-25 17:20:57.205983] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.446 [2024-04-25 17:20:57.344374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.446 [2024-04-25 17:20:57.411382] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.446 [2024-04-25 17:20:57.411449] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.446 [2024-04-25 17:20:57.411464] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.446 [2024-04-25 17:20:57.411475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.446 [2024-04-25 17:20:57.411484] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:27.446 [2024-04-25 17:20:57.411518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.380 17:20:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:28.380 17:20:58 -- common/autotest_common.sh@850 -- # return 0 00:17:28.380 17:20:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:28.380 17:20:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:28.380 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:17:28.380 17:20:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.380 17:20:58 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.LiZCnHE9db 00:17:28.380 17:20:58 -- target/tls.sh@49 -- # local key=/tmp/tmp.LiZCnHE9db 00:17:28.380 17:20:58 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:28.637 [2024-04-25 17:20:58.382270] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.637 17:20:58 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:28.895 17:20:58 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:28.895 [2024-04-25 17:20:58.862412] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:28.895 [2024-04-25 17:20:58.862603] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.153 17:20:58 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:29.416 malloc0 00:17:29.416 17:20:59 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:29.699 17:20:59 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LiZCnHE9db 00:17:29.699 [2024-04-25 17:20:59.597628] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:29.699 17:20:59 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:29.699 17:20:59 -- target/tls.sh@222 -- # bdevperf_pid=82909 00:17:29.699 17:20:59 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.699 17:20:59 -- target/tls.sh@225 -- # waitforlisten 82909 /var/tmp/bdevperf.sock 00:17:29.699 17:20:59 -- common/autotest_common.sh@817 -- # '[' -z 82909 ']' 00:17:29.699 17:20:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.699 17:20:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:29.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.699 17:20:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.699 17:20:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:29.699 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:17:29.699 [2024-04-25 17:20:59.658109] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:17:29.699 [2024-04-25 17:20:59.658225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82909 ] 00:17:29.970 [2024-04-25 17:20:59.794077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.970 [2024-04-25 17:20:59.847594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.906 17:21:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:30.906 17:21:00 -- common/autotest_common.sh@850 -- # return 0 00:17:30.906 17:21:00 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LiZCnHE9db 00:17:30.906 17:21:00 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:31.165 [2024-04-25 17:21:01.103427] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.423 nvme0n1 00:17:31.423 17:21:01 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.423 Running I/O for 1 seconds... 00:17:32.361 00:17:32.361 Latency(us) 00:17:32.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.361 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:32.361 Verification LBA range: start 0x0 length 0x2000 00:17:32.361 nvme0n1 : 1.02 4174.03 16.30 0.00 0.00 30254.45 3991.74 18945.86 00:17:32.361 =================================================================================================================== 00:17:32.361 Total : 4174.03 16.30 0.00 0.00 30254.45 3991.74 18945.86 00:17:32.361 0 00:17:32.361 17:21:02 -- target/tls.sh@234 -- # killprocess 82909 00:17:32.361 17:21:02 -- common/autotest_common.sh@936 -- # '[' -z 82909 ']' 00:17:32.361 17:21:02 -- common/autotest_common.sh@940 -- # kill -0 82909 00:17:32.361 17:21:02 -- common/autotest_common.sh@941 -- # uname 00:17:32.361 17:21:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.361 17:21:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82909 00:17:32.620 killing process with pid 82909 00:17:32.620 Received shutdown signal, test time was about 1.000000 seconds 00:17:32.620 00:17:32.620 Latency(us) 00:17:32.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.620 =================================================================================================================== 00:17:32.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.620 17:21:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:32.620 17:21:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:32.620 17:21:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82909' 00:17:32.620 17:21:02 -- common/autotest_common.sh@955 -- # kill 82909 00:17:32.620 17:21:02 -- common/autotest_common.sh@960 -- # wait 82909 00:17:32.620 17:21:02 -- target/tls.sh@235 -- # killprocess 82812 00:17:32.620 17:21:02 -- common/autotest_common.sh@936 -- # '[' -z 82812 ']' 00:17:32.620 17:21:02 -- common/autotest_common.sh@940 -- # kill -0 82812 00:17:32.620 17:21:02 -- common/autotest_common.sh@941 -- # 
uname 00:17:32.620 17:21:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.620 17:21:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82812 00:17:32.620 killing process with pid 82812 00:17:32.620 17:21:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:32.620 17:21:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:32.620 17:21:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82812' 00:17:32.620 17:21:02 -- common/autotest_common.sh@955 -- # kill 82812 00:17:32.620 [2024-04-25 17:21:02.553892] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:32.620 17:21:02 -- common/autotest_common.sh@960 -- # wait 82812 00:17:32.880 17:21:02 -- target/tls.sh@238 -- # nvmfappstart 00:17:32.880 17:21:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:32.880 17:21:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:32.880 17:21:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.880 17:21:02 -- nvmf/common.sh@470 -- # nvmfpid=82983 00:17:32.880 17:21:02 -- nvmf/common.sh@471 -- # waitforlisten 82983 00:17:32.880 17:21:02 -- common/autotest_common.sh@817 -- # '[' -z 82983 ']' 00:17:32.880 17:21:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.880 17:21:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:32.880 17:21:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.880 17:21:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:32.880 17:21:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.880 17:21:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:32.880 [2024-04-25 17:21:02.789367] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:32.880 [2024-04-25 17:21:02.789450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.139 [2024-04-25 17:21:02.925021] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.139 [2024-04-25 17:21:02.981138] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.139 [2024-04-25 17:21:02.981215] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.139 [2024-04-25 17:21:02.981241] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.139 [2024-04-25 17:21:02.981249] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.139 [2024-04-25 17:21:02.981255] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.139 [2024-04-25 17:21:02.981287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.139 17:21:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.139 17:21:03 -- common/autotest_common.sh@850 -- # return 0 00:17:33.139 17:21:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:33.139 17:21:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:33.139 17:21:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.139 17:21:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.139 17:21:03 -- target/tls.sh@239 -- # rpc_cmd 00:17:33.139 17:21:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.139 17:21:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.139 [2024-04-25 17:21:03.113410] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.397 malloc0 00:17:33.397 [2024-04-25 17:21:03.139712] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.397 [2024-04-25 17:21:03.139930] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.397 17:21:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.397 17:21:03 -- target/tls.sh@252 -- # bdevperf_pid=83021 00:17:33.397 17:21:03 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:33.397 17:21:03 -- target/tls.sh@254 -- # waitforlisten 83021 /var/tmp/bdevperf.sock 00:17:33.397 17:21:03 -- common/autotest_common.sh@817 -- # '[' -z 83021 ']' 00:17:33.397 17:21:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:33.397 17:21:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:33.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:33.397 17:21:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:33.397 17:21:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:33.397 17:21:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.397 [2024-04-25 17:21:03.217174] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
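On the initiator side, each bdevperf run in this trace loads the same PSK into a keyring and attaches the controller over TLS before driving I/O; condensed from the trace (rpc.py and bdevperf.py abbreviate the full repository paths shown in the commands above):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LiZCnHE9db
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # 1-second verify workload, queue depth 128, 4k I/O, per the bdevperf arguments above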
00:17:33.397 [2024-04-25 17:21:03.217257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83021 ] 00:17:33.397 [2024-04-25 17:21:03.350330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.656 [2024-04-25 17:21:03.402749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.656 17:21:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.656 17:21:03 -- common/autotest_common.sh@850 -- # return 0 00:17:33.656 17:21:03 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LiZCnHE9db 00:17:33.914 17:21:03 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:34.172 [2024-04-25 17:21:03.901305] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.172 nvme0n1 00:17:34.172 17:21:03 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.172 Running I/O for 1 seconds... 00:17:35.549 00:17:35.549 Latency(us) 00:17:35.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.549 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:35.549 Verification LBA range: start 0x0 length 0x2000 00:17:35.549 nvme0n1 : 1.02 4415.66 17.25 0.00 0.00 28663.95 1057.51 20018.27 00:17:35.549 =================================================================================================================== 00:17:35.549 Total : 4415.66 17.25 0.00 0.00 28663.95 1057.51 20018.27 00:17:35.549 0 00:17:35.549 17:21:05 -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:35.549 17:21:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.549 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:17:35.549 17:21:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.549 17:21:05 -- target/tls.sh@263 -- # tgtcfg='{ 00:17:35.549 "subsystems": [ 00:17:35.549 { 00:17:35.549 "subsystem": "keyring", 00:17:35.549 "config": [ 00:17:35.549 { 00:17:35.549 "method": "keyring_file_add_key", 00:17:35.549 "params": { 00:17:35.549 "name": "key0", 00:17:35.549 "path": "/tmp/tmp.LiZCnHE9db" 00:17:35.549 } 00:17:35.549 } 00:17:35.549 ] 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "subsystem": "iobuf", 00:17:35.549 "config": [ 00:17:35.549 { 00:17:35.549 "method": "iobuf_set_options", 00:17:35.549 "params": { 00:17:35.549 "large_bufsize": 135168, 00:17:35.549 "large_pool_count": 1024, 00:17:35.549 "small_bufsize": 8192, 00:17:35.549 "small_pool_count": 8192 00:17:35.549 } 00:17:35.549 } 00:17:35.549 ] 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "subsystem": "sock", 00:17:35.549 "config": [ 00:17:35.549 { 00:17:35.549 "method": "sock_impl_set_options", 00:17:35.549 "params": { 00:17:35.549 "enable_ktls": false, 00:17:35.549 "enable_placement_id": 0, 00:17:35.549 "enable_quickack": false, 00:17:35.549 "enable_recv_pipe": true, 00:17:35.549 "enable_zerocopy_send_client": false, 00:17:35.549 "enable_zerocopy_send_server": true, 00:17:35.549 "impl_name": "posix", 00:17:35.549 "recv_buf_size": 2097152, 00:17:35.549 "send_buf_size": 2097152, 
00:17:35.549 "tls_version": 0, 00:17:35.549 "zerocopy_threshold": 0 00:17:35.549 } 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "method": "sock_impl_set_options", 00:17:35.549 "params": { 00:17:35.549 "enable_ktls": false, 00:17:35.549 "enable_placement_id": 0, 00:17:35.549 "enable_quickack": false, 00:17:35.549 "enable_recv_pipe": true, 00:17:35.549 "enable_zerocopy_send_client": false, 00:17:35.549 "enable_zerocopy_send_server": true, 00:17:35.549 "impl_name": "ssl", 00:17:35.549 "recv_buf_size": 4096, 00:17:35.549 "send_buf_size": 4096, 00:17:35.549 "tls_version": 0, 00:17:35.549 "zerocopy_threshold": 0 00:17:35.549 } 00:17:35.549 } 00:17:35.549 ] 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "subsystem": "vmd", 00:17:35.549 "config": [] 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "subsystem": "accel", 00:17:35.549 "config": [ 00:17:35.549 { 00:17:35.549 "method": "accel_set_options", 00:17:35.549 "params": { 00:17:35.549 "buf_count": 2048, 00:17:35.549 "large_cache_size": 16, 00:17:35.549 "sequence_count": 2048, 00:17:35.549 "small_cache_size": 128, 00:17:35.549 "task_count": 2048 00:17:35.549 } 00:17:35.549 } 00:17:35.549 ] 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "subsystem": "bdev", 00:17:35.549 "config": [ 00:17:35.549 { 00:17:35.549 "method": "bdev_set_options", 00:17:35.549 "params": { 00:17:35.549 "bdev_auto_examine": true, 00:17:35.549 "bdev_io_cache_size": 256, 00:17:35.549 "bdev_io_pool_size": 65535, 00:17:35.549 "iobuf_large_cache_size": 16, 00:17:35.549 "iobuf_small_cache_size": 128 00:17:35.549 } 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "method": "bdev_raid_set_options", 00:17:35.549 "params": { 00:17:35.549 "process_window_size_kb": 1024 00:17:35.549 } 00:17:35.549 }, 00:17:35.549 { 00:17:35.549 "method": "bdev_iscsi_set_options", 00:17:35.550 "params": { 00:17:35.550 "timeout_sec": 30 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "bdev_nvme_set_options", 00:17:35.550 "params": { 00:17:35.550 "action_on_timeout": "none", 00:17:35.550 "allow_accel_sequence": false, 00:17:35.550 "arbitration_burst": 0, 00:17:35.550 "bdev_retry_count": 3, 00:17:35.550 "ctrlr_loss_timeout_sec": 0, 00:17:35.550 "delay_cmd_submit": true, 00:17:35.550 "dhchap_dhgroups": [ 00:17:35.550 "null", 00:17:35.550 "ffdhe2048", 00:17:35.550 "ffdhe3072", 00:17:35.550 "ffdhe4096", 00:17:35.550 "ffdhe6144", 00:17:35.550 "ffdhe8192" 00:17:35.550 ], 00:17:35.550 "dhchap_digests": [ 00:17:35.550 "sha256", 00:17:35.550 "sha384", 00:17:35.550 "sha512" 00:17:35.550 ], 00:17:35.550 "disable_auto_failback": false, 00:17:35.550 "fast_io_fail_timeout_sec": 0, 00:17:35.550 "generate_uuids": false, 00:17:35.550 "high_priority_weight": 0, 00:17:35.550 "io_path_stat": false, 00:17:35.550 "io_queue_requests": 0, 00:17:35.550 "keep_alive_timeout_ms": 10000, 00:17:35.550 "low_priority_weight": 0, 00:17:35.550 "medium_priority_weight": 0, 00:17:35.550 "nvme_adminq_poll_period_us": 10000, 00:17:35.550 "nvme_error_stat": false, 00:17:35.550 "nvme_ioq_poll_period_us": 0, 00:17:35.550 "rdma_cm_event_timeout_ms": 0, 00:17:35.550 "rdma_max_cq_size": 0, 00:17:35.550 "rdma_srq_size": 0, 00:17:35.550 "reconnect_delay_sec": 0, 00:17:35.550 "timeout_admin_us": 0, 00:17:35.550 "timeout_us": 0, 00:17:35.550 "transport_ack_timeout": 0, 00:17:35.550 "transport_retry_count": 4, 00:17:35.550 "transport_tos": 0 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "bdev_nvme_set_hotplug", 00:17:35.550 "params": { 00:17:35.550 "enable": false, 00:17:35.550 "period_us": 100000 00:17:35.550 } 00:17:35.550 
}, 00:17:35.550 { 00:17:35.550 "method": "bdev_malloc_create", 00:17:35.550 "params": { 00:17:35.550 "block_size": 4096, 00:17:35.550 "name": "malloc0", 00:17:35.550 "num_blocks": 8192, 00:17:35.550 "optimal_io_boundary": 0, 00:17:35.550 "physical_block_size": 4096, 00:17:35.550 "uuid": "0fc38d8f-01cf-4e80-b71b-1f5362e12376" 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "bdev_wait_for_examine" 00:17:35.550 } 00:17:35.550 ] 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "subsystem": "nbd", 00:17:35.550 "config": [] 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "subsystem": "scheduler", 00:17:35.550 "config": [ 00:17:35.550 { 00:17:35.550 "method": "framework_set_scheduler", 00:17:35.550 "params": { 00:17:35.550 "name": "static" 00:17:35.550 } 00:17:35.550 } 00:17:35.550 ] 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "subsystem": "nvmf", 00:17:35.550 "config": [ 00:17:35.550 { 00:17:35.550 "method": "nvmf_set_config", 00:17:35.550 "params": { 00:17:35.550 "admin_cmd_passthru": { 00:17:35.550 "identify_ctrlr": false 00:17:35.550 }, 00:17:35.550 "discovery_filter": "match_any" 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "nvmf_set_max_subsystems", 00:17:35.550 "params": { 00:17:35.550 "max_subsystems": 1024 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "nvmf_set_crdt", 00:17:35.550 "params": { 00:17:35.550 "crdt1": 0, 00:17:35.550 "crdt2": 0, 00:17:35.550 "crdt3": 0 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "nvmf_create_transport", 00:17:35.550 "params": { 00:17:35.550 "abort_timeout_sec": 1, 00:17:35.550 "ack_timeout": 0, 00:17:35.550 "buf_cache_size": 4294967295, 00:17:35.550 "c2h_success": false, 00:17:35.550 "data_wr_pool_size": 0, 00:17:35.550 "dif_insert_or_strip": false, 00:17:35.550 "in_capsule_data_size": 4096, 00:17:35.550 "io_unit_size": 131072, 00:17:35.550 "max_aq_depth": 128, 00:17:35.550 "max_io_qpairs_per_ctrlr": 127, 00:17:35.550 "max_io_size": 131072, 00:17:35.550 "max_queue_depth": 128, 00:17:35.550 "num_shared_buffers": 511, 00:17:35.550 "sock_priority": 0, 00:17:35.550 "trtype": "TCP", 00:17:35.550 "zcopy": false 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "nvmf_create_subsystem", 00:17:35.550 "params": { 00:17:35.550 "allow_any_host": false, 00:17:35.550 "ana_reporting": false, 00:17:35.550 "max_cntlid": 65519, 00:17:35.550 "max_namespaces": 32, 00:17:35.550 "min_cntlid": 1, 00:17:35.550 "model_number": "SPDK bdev Controller", 00:17:35.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.550 "serial_number": "00000000000000000000" 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "nvmf_subsystem_add_host", 00:17:35.550 "params": { 00:17:35.550 "host": "nqn.2016-06.io.spdk:host1", 00:17:35.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.550 "psk": "key0" 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "nvmf_subsystem_add_ns", 00:17:35.550 "params": { 00:17:35.550 "namespace": { 00:17:35.550 "bdev_name": "malloc0", 00:17:35.550 "nguid": "0FC38D8F01CF4E80B71B1F5362E12376", 00:17:35.550 "no_auto_visible": false, 00:17:35.550 "nsid": 1, 00:17:35.550 "uuid": "0fc38d8f-01cf-4e80-b71b-1f5362e12376" 00:17:35.550 }, 00:17:35.550 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:35.550 } 00:17:35.550 }, 00:17:35.550 { 00:17:35.550 "method": "nvmf_subsystem_add_listener", 00:17:35.550 "params": { 00:17:35.550 "listen_address": { 00:17:35.550 "adrfam": "IPv4", 00:17:35.550 "traddr": "10.0.0.2", 00:17:35.550 "trsvcid": "4420", 00:17:35.550 
"trtype": "TCP" 00:17:35.550 }, 00:17:35.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.550 "secure_channel": true 00:17:35.550 } 00:17:35.550 } 00:17:35.550 ] 00:17:35.550 } 00:17:35.550 ] 00:17:35.550 }' 00:17:35.550 17:21:05 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:35.810 17:21:05 -- target/tls.sh@264 -- # bperfcfg='{ 00:17:35.810 "subsystems": [ 00:17:35.810 { 00:17:35.810 "subsystem": "keyring", 00:17:35.810 "config": [ 00:17:35.810 { 00:17:35.810 "method": "keyring_file_add_key", 00:17:35.810 "params": { 00:17:35.810 "name": "key0", 00:17:35.810 "path": "/tmp/tmp.LiZCnHE9db" 00:17:35.810 } 00:17:35.810 } 00:17:35.810 ] 00:17:35.810 }, 00:17:35.810 { 00:17:35.810 "subsystem": "iobuf", 00:17:35.810 "config": [ 00:17:35.810 { 00:17:35.810 "method": "iobuf_set_options", 00:17:35.810 "params": { 00:17:35.810 "large_bufsize": 135168, 00:17:35.810 "large_pool_count": 1024, 00:17:35.810 "small_bufsize": 8192, 00:17:35.810 "small_pool_count": 8192 00:17:35.810 } 00:17:35.810 } 00:17:35.810 ] 00:17:35.810 }, 00:17:35.810 { 00:17:35.810 "subsystem": "sock", 00:17:35.810 "config": [ 00:17:35.810 { 00:17:35.810 "method": "sock_impl_set_options", 00:17:35.810 "params": { 00:17:35.810 "enable_ktls": false, 00:17:35.810 "enable_placement_id": 0, 00:17:35.810 "enable_quickack": false, 00:17:35.810 "enable_recv_pipe": true, 00:17:35.810 "enable_zerocopy_send_client": false, 00:17:35.810 "enable_zerocopy_send_server": true, 00:17:35.810 "impl_name": "posix", 00:17:35.810 "recv_buf_size": 2097152, 00:17:35.810 "send_buf_size": 2097152, 00:17:35.810 "tls_version": 0, 00:17:35.810 "zerocopy_threshold": 0 00:17:35.810 } 00:17:35.810 }, 00:17:35.810 { 00:17:35.810 "method": "sock_impl_set_options", 00:17:35.810 "params": { 00:17:35.810 "enable_ktls": false, 00:17:35.810 "enable_placement_id": 0, 00:17:35.810 "enable_quickack": false, 00:17:35.810 "enable_recv_pipe": true, 00:17:35.810 "enable_zerocopy_send_client": false, 00:17:35.810 "enable_zerocopy_send_server": true, 00:17:35.810 "impl_name": "ssl", 00:17:35.810 "recv_buf_size": 4096, 00:17:35.810 "send_buf_size": 4096, 00:17:35.810 "tls_version": 0, 00:17:35.810 "zerocopy_threshold": 0 00:17:35.810 } 00:17:35.810 } 00:17:35.810 ] 00:17:35.810 }, 00:17:35.810 { 00:17:35.810 "subsystem": "vmd", 00:17:35.810 "config": [] 00:17:35.810 }, 00:17:35.811 { 00:17:35.811 "subsystem": "accel", 00:17:35.811 "config": [ 00:17:35.811 { 00:17:35.811 "method": "accel_set_options", 00:17:35.811 "params": { 00:17:35.811 "buf_count": 2048, 00:17:35.811 "large_cache_size": 16, 00:17:35.811 "sequence_count": 2048, 00:17:35.811 "small_cache_size": 128, 00:17:35.811 "task_count": 2048 00:17:35.811 } 00:17:35.811 } 00:17:35.811 ] 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "subsystem": "bdev", 00:17:35.811 "config": [ 00:17:35.811 { 00:17:35.811 "method": "bdev_set_options", 00:17:35.811 "params": { 00:17:35.811 "bdev_auto_examine": true, 00:17:35.811 "bdev_io_cache_size": 256, 00:17:35.811 "bdev_io_pool_size": 65535, 00:17:35.811 "iobuf_large_cache_size": 16, 00:17:35.811 "iobuf_small_cache_size": 128 00:17:35.811 } 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "method": "bdev_raid_set_options", 00:17:35.811 "params": { 00:17:35.811 "process_window_size_kb": 1024 00:17:35.811 } 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "method": "bdev_iscsi_set_options", 00:17:35.811 "params": { 00:17:35.811 "timeout_sec": 30 00:17:35.811 } 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "method": 
"bdev_nvme_set_options", 00:17:35.811 "params": { 00:17:35.811 "action_on_timeout": "none", 00:17:35.811 "allow_accel_sequence": false, 00:17:35.811 "arbitration_burst": 0, 00:17:35.811 "bdev_retry_count": 3, 00:17:35.811 "ctrlr_loss_timeout_sec": 0, 00:17:35.811 "delay_cmd_submit": true, 00:17:35.811 "dhchap_dhgroups": [ 00:17:35.811 "null", 00:17:35.811 "ffdhe2048", 00:17:35.811 "ffdhe3072", 00:17:35.811 "ffdhe4096", 00:17:35.811 "ffdhe6144", 00:17:35.811 "ffdhe8192" 00:17:35.811 ], 00:17:35.811 "dhchap_digests": [ 00:17:35.811 "sha256", 00:17:35.811 "sha384", 00:17:35.811 "sha512" 00:17:35.811 ], 00:17:35.811 "disable_auto_failback": false, 00:17:35.811 "fast_io_fail_timeout_sec": 0, 00:17:35.811 "generate_uuids": false, 00:17:35.811 "high_priority_weight": 0, 00:17:35.811 "io_path_stat": false, 00:17:35.811 "io_queue_requests": 512, 00:17:35.811 "keep_alive_timeout_ms": 10000, 00:17:35.811 "low_priority_weight": 0, 00:17:35.811 "medium_priority_weight": 0, 00:17:35.811 "nvme_adminq_poll_period_us": 10000, 00:17:35.811 "nvme_error_stat": false, 00:17:35.811 "nvme_ioq_poll_period_us": 0, 00:17:35.811 "rdma_cm_event_timeout_ms": 0, 00:17:35.811 "rdma_max_cq_size": 0, 00:17:35.811 "rdma_srq_size": 0, 00:17:35.811 "reconnect_delay_sec": 0, 00:17:35.811 "timeout_admin_us": 0, 00:17:35.811 "timeout_us": 0, 00:17:35.811 "transport_ack_timeout": 0, 00:17:35.811 "transport_retry_count": 4, 00:17:35.811 "transport_tos": 0 00:17:35.811 } 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "method": "bdev_nvme_attach_controller", 00:17:35.811 "params": { 00:17:35.811 "adrfam": "IPv4", 00:17:35.811 "ctrlr_loss_timeout_sec": 0, 00:17:35.811 "ddgst": false, 00:17:35.811 "fast_io_fail_timeout_sec": 0, 00:17:35.811 "hdgst": false, 00:17:35.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.811 "name": "nvme0", 00:17:35.811 "prchk_guard": false, 00:17:35.811 "prchk_reftag": false, 00:17:35.811 "psk": "key0", 00:17:35.811 "reconnect_delay_sec": 0, 00:17:35.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.811 "traddr": "10.0.0.2", 00:17:35.811 "trsvcid": "4420", 00:17:35.811 "trtype": "TCP" 00:17:35.811 } 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "method": "bdev_nvme_set_hotplug", 00:17:35.811 "params": { 00:17:35.811 "enable": false, 00:17:35.811 "period_us": 100000 00:17:35.811 } 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "method": "bdev_enable_histogram", 00:17:35.811 "params": { 00:17:35.811 "enable": true, 00:17:35.811 "name": "nvme0n1" 00:17:35.811 } 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "method": "bdev_wait_for_examine" 00:17:35.811 } 00:17:35.811 ] 00:17:35.811 }, 00:17:35.811 { 00:17:35.811 "subsystem": "nbd", 00:17:35.811 "config": [] 00:17:35.811 } 00:17:35.811 ] 00:17:35.811 }' 00:17:35.811 17:21:05 -- target/tls.sh@266 -- # killprocess 83021 00:17:35.811 17:21:05 -- common/autotest_common.sh@936 -- # '[' -z 83021 ']' 00:17:35.811 17:21:05 -- common/autotest_common.sh@940 -- # kill -0 83021 00:17:35.811 17:21:05 -- common/autotest_common.sh@941 -- # uname 00:17:35.811 17:21:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.811 17:21:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83021 00:17:35.811 killing process with pid 83021 00:17:35.811 Received shutdown signal, test time was about 1.000000 seconds 00:17:35.811 00:17:35.811 Latency(us) 00:17:35.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.811 
=================================================================================================================== 00:17:35.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.811 17:21:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:35.811 17:21:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:35.811 17:21:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83021' 00:17:35.811 17:21:05 -- common/autotest_common.sh@955 -- # kill 83021 00:17:35.811 17:21:05 -- common/autotest_common.sh@960 -- # wait 83021 00:17:35.811 17:21:05 -- target/tls.sh@267 -- # killprocess 82983 00:17:35.811 17:21:05 -- common/autotest_common.sh@936 -- # '[' -z 82983 ']' 00:17:35.811 17:21:05 -- common/autotest_common.sh@940 -- # kill -0 82983 00:17:35.811 17:21:05 -- common/autotest_common.sh@941 -- # uname 00:17:35.811 17:21:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.811 17:21:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82983 00:17:36.072 killing process with pid 82983 00:17:36.072 17:21:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:36.072 17:21:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:36.072 17:21:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82983' 00:17:36.072 17:21:05 -- common/autotest_common.sh@955 -- # kill 82983 00:17:36.072 17:21:05 -- common/autotest_common.sh@960 -- # wait 82983 00:17:36.072 17:21:05 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:17:36.072 17:21:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:36.072 17:21:05 -- target/tls.sh@269 -- # echo '{ 00:17:36.072 "subsystems": [ 00:17:36.072 { 00:17:36.072 "subsystem": "keyring", 00:17:36.072 "config": [ 00:17:36.072 { 00:17:36.072 "method": "keyring_file_add_key", 00:17:36.072 "params": { 00:17:36.072 "name": "key0", 00:17:36.072 "path": "/tmp/tmp.LiZCnHE9db" 00:17:36.072 } 00:17:36.072 } 00:17:36.072 ] 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "subsystem": "iobuf", 00:17:36.072 "config": [ 00:17:36.072 { 00:17:36.072 "method": "iobuf_set_options", 00:17:36.072 "params": { 00:17:36.072 "large_bufsize": 135168, 00:17:36.072 "large_pool_count": 1024, 00:17:36.072 "small_bufsize": 8192, 00:17:36.072 "small_pool_count": 8192 00:17:36.072 } 00:17:36.072 } 00:17:36.072 ] 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "subsystem": "sock", 00:17:36.072 "config": [ 00:17:36.072 { 00:17:36.072 "method": "sock_impl_set_options", 00:17:36.072 "params": { 00:17:36.072 "enable_ktls": false, 00:17:36.072 "enable_placement_id": 0, 00:17:36.072 "enable_quickack": false, 00:17:36.072 "enable_recv_pipe": true, 00:17:36.072 "enable_zerocopy_send_client": false, 00:17:36.072 "enable_zerocopy_send_server": true, 00:17:36.072 "impl_name": "posix", 00:17:36.072 "recv_buf_size": 2097152, 00:17:36.072 "send_buf_size": 2097152, 00:17:36.072 "tls_version": 0, 00:17:36.072 "zerocopy_threshold": 0 00:17:36.072 } 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "method": "sock_impl_set_options", 00:17:36.072 "params": { 00:17:36.072 "enable_ktls": false, 00:17:36.072 "enable_placement_id": 0, 00:17:36.072 "enable_quickack": false, 00:17:36.072 "enable_recv_pipe": true, 00:17:36.072 "enable_zerocopy_send_client": false, 00:17:36.072 "enable_zerocopy_send_server": true, 00:17:36.072 "impl_name": "ssl", 00:17:36.072 "recv_buf_size": 4096, 00:17:36.072 "send_buf_size": 4096, 00:17:36.072 "tls_version": 0, 00:17:36.072 "zerocopy_threshold": 0 00:17:36.072 } 
00:17:36.072 } 00:17:36.072 ] 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "subsystem": "vmd", 00:17:36.072 "config": [] 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "subsystem": "accel", 00:17:36.072 "config": [ 00:17:36.072 { 00:17:36.072 "method": "accel_set_options", 00:17:36.072 "params": { 00:17:36.072 "buf_count": 2048, 00:17:36.072 "large_cache_size": 16, 00:17:36.072 "sequence_count": 2048, 00:17:36.072 "small_cache_size": 128, 00:17:36.072 "task_count": 2048 00:17:36.072 } 00:17:36.072 } 00:17:36.072 ] 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "subsystem": "bdev", 00:17:36.072 "config": [ 00:17:36.072 { 00:17:36.072 "method": "bdev_set_options", 00:17:36.072 "params": { 00:17:36.072 "bdev_auto_examine": true, 00:17:36.072 "bdev_io_cache_size": 256, 00:17:36.072 "bdev_io_pool_size": 65535, 00:17:36.072 "iobuf_large_cache_size": 16, 00:17:36.072 "iobuf_small_cache_size": 128 00:17:36.072 } 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "method": "bdev_raid_set_options", 00:17:36.072 "params": { 00:17:36.072 "process_window_size_kb": 1024 00:17:36.072 } 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "method": "bdev_iscsi_set_options", 00:17:36.072 "params": { 00:17:36.072 "timeout_sec": 30 00:17:36.072 } 00:17:36.072 }, 00:17:36.072 { 00:17:36.072 "method": "bdev_nvme_set_options", 00:17:36.072 "params": { 00:17:36.072 "action_on_timeout": "none", 00:17:36.072 "allow_accel_sequence": false, 00:17:36.072 "arbitration_burst": 0, 00:17:36.072 "bdev_retry_count": 3, 00:17:36.072 "ctrlr_loss_timeout_sec": 0, 00:17:36.072 "delay_cmd_submit": true, 00:17:36.072 "dhchap_dhgroups": [ 00:17:36.072 "null", 00:17:36.072 "ffdhe2048", 00:17:36.072 "ffdhe3072", 00:17:36.072 "ffdhe4096", 00:17:36.072 "ffdhe6144", 00:17:36.072 "ffdhe8192" 00:17:36.072 ], 00:17:36.072 "dhchap_digests": [ 00:17:36.072 "sha256", 00:17:36.072 "sha384", 00:17:36.072 "sha512" 00:17:36.072 ], 00:17:36.072 "disable_auto_failback": false, 00:17:36.072 "fast_io_fail_timeout_sec": 0, 00:17:36.072 "generate_uuids": false, 00:17:36.072 "high_priority_weight": 0, 00:17:36.072 "io_path_stat": false, 00:17:36.072 "io_queue_requests": 0, 00:17:36.072 "keep_alive_timeout_ms": 10000, 00:17:36.072 "low_priority_weight": 0, 00:17:36.072 "medium_priority_weight": 0, 00:17:36.072 "nvme_adminq_poll_period_us": 10000, 00:17:36.072 "nvme_error_stat": false, 00:17:36.072 "nvme_ioq_poll_period_us": 0, 00:17:36.072 "rdma_cm_event_timeout_ms": 0, 00:17:36.072 "rdma_max_cq_size": 0, 00:17:36.072 "rdma_srq_size": 0, 00:17:36.072 "reconnect_delay_sec": 0, 00:17:36.072 "timeout_admin_us": 0, 00:17:36.072 "timeout_us": 0, 00:17:36.072 "transport_ack_timeout": 0, 00:17:36.073 "transport_retry_count": 4, 00:17:36.073 "transport_tos": 0 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "bdev_nvme_set_hotplug", 00:17:36.073 "params": { 00:17:36.073 "enable": false, 00:17:36.073 "period_us": 100000 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "bdev_malloc_create", 00:17:36.073 "params": { 00:17:36.073 "block_size": 4096, 00:17:36.073 "name": "malloc0", 00:17:36.073 "num_blocks": 8192, 00:17:36.073 "optimal_io_boundary": 0, 00:17:36.073 "physical_block_size": 4096, 00:17:36.073 "uuid": "0fc38d8f-01cf-4e80-b71b-1f5362e12376" 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "bdev_wait_for_examine" 00:17:36.073 } 00:17:36.073 ] 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "subsystem": "nbd", 00:17:36.073 "config": [] 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "subsystem": "scheduler", 00:17:36.073 
"config": [ 00:17:36.073 { 00:17:36.073 "method": "framework_set_scheduler", 00:17:36.073 "params": { 00:17:36.073 "name": "static" 00:17:36.073 } 00:17:36.073 } 00:17:36.073 ] 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "subsystem": "nvmf", 00:17:36.073 "config": [ 00:17:36.073 { 00:17:36.073 "method": "nvmf_set_config", 00:17:36.073 "params": { 00:17:36.073 "admin_cmd_passthru": { 00:17:36.073 "identify_ctrlr": false 00:17:36.073 }, 00:17:36.073 "discovery_filter": "match_any" 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "nvmf_set_max_subsystems", 00:17:36.073 "params": { 00:17:36.073 "max_subsystems": 1024 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "nvmf_set_crdt", 00:17:36.073 "params": { 00:17:36.073 "crdt1": 0, 00:17:36.073 "crdt2": 0, 00:17:36.073 "crdt3": 0 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "nvmf_create_transport", 00:17:36.073 "params": { 00:17:36.073 "abort_timeout_sec": 1, 00:17:36.073 "ack_timeout": 0, 00:17:36.073 "buf_cache_size": 4294967295, 00:17:36.073 "c2h_success": false, 00:17:36.073 "data_wr_pool_size": 0, 00:17:36.073 "dif_insert_or_strip": false, 00:17:36.073 "in_capsule_data_size": 4096, 00:17:36.073 "io_unit_size": 131072, 00:17:36.073 "max_aq_depth": 128, 00:17:36.073 "max_io_qpairs_per_ctrlr": 127, 00:17:36.073 "max_io_size": 131072, 00:17:36.073 "max_queue_depth": 128, 00:17:36.073 "num_shared_buffers": 511, 00:17:36.073 "sock_priority": 0, 00:17:36.073 "trtype": "TCP", 00:17:36.073 "zcopy": false 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "nvmf_create_subsystem", 00:17:36.073 "params": { 00:17:36.073 "allow_any_host": false, 00:17:36.073 "ana_reporting": false, 00:17:36.073 "max_cntlid": 65519, 00:17:36.073 "max_namespaces": 32, 00:17:36.073 "min_cntlid": 1, 00:17:36.073 "model_number": "SPDK bdev Controller", 00:17:36.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.073 "serial_number": "00000000000000000000" 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "nvmf_subsystem_add_host", 00:17:36.073 "params": { 00:17:36.073 "host": "nqn.2016-06.io.spdk:host1", 00:17:36.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.073 "psk": "key0" 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "nvmf_subsystem_add_ns", 00:17:36.073 "params": { 00:17:36.073 "namespace": { 00:17:36.073 "bdev_name": "malloc0", 00:17:36.073 "nguid": "0FC38D8F01CF4E80B71B1F5362E12376", 00:17:36.073 "no_auto_visible": false, 00:17:36.073 "nsid": 1, 00:17:36.073 "uuid": "0fc38d8f-01cf-4e80-b71b-1f5362e12376" 00:17:36.073 }, 00:17:36.073 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:36.073 } 00:17:36.073 }, 00:17:36.073 { 00:17:36.073 "method": "nvmf_subsystem_add_listener", 00:17:36.073 "params": { 00:17:36.073 "listen_address": { 00:17:36.073 "adrfam": "IPv4", 00:17:36.073 "traddr": "10.0.0.2", 00:17:36.073 "trsvcid": "4420", 00:17:36.073 "trtype": "TCP" 00:17:36.073 }, 00:17:36.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.073 "secure_channel": true 00:17:36.073 } 00:17:36.073 } 00:17:36.073 ] 00:17:36.073 } 00:17:36.073 ] 00:17:36.073 }' 00:17:36.073 17:21:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:36.073 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:17:36.073 17:21:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:36.073 17:21:05 -- nvmf/common.sh@470 -- # nvmfpid=83088 00:17:36.073 17:21:05 -- nvmf/common.sh@471 -- # 
waitforlisten 83088 00:17:36.073 17:21:05 -- common/autotest_common.sh@817 -- # '[' -z 83088 ']' 00:17:36.073 17:21:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.073 17:21:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.073 17:21:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.073 17:21:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:36.073 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:17:36.073 [2024-04-25 17:21:06.028231] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:36.073 [2024-04-25 17:21:06.028318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.333 [2024-04-25 17:21:06.159963] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.333 [2024-04-25 17:21:06.211232] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.333 [2024-04-25 17:21:06.211275] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.333 [2024-04-25 17:21:06.211300] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.333 [2024-04-25 17:21:06.211307] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.333 [2024-04-25 17:21:06.211313] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.333 [2024-04-25 17:21:06.211387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.592 [2024-04-25 17:21:06.391160] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.592 [2024-04-25 17:21:06.423092] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.592 [2024-04-25 17:21:06.423264] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.160 17:21:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.160 17:21:07 -- common/autotest_common.sh@850 -- # return 0 00:17:37.160 17:21:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:37.160 17:21:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:37.160 17:21:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.160 17:21:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.160 17:21:07 -- target/tls.sh@272 -- # bdevperf_pid=83139 00:17:37.160 17:21:07 -- target/tls.sh@273 -- # waitforlisten 83139 /var/tmp/bdevperf.sock 00:17:37.160 17:21:07 -- common/autotest_common.sh@817 -- # '[' -z 83139 ']' 00:17:37.160 17:21:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.160 17:21:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.160 17:21:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:37.160 17:21:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.160 17:21:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.160 17:21:07 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:37.160 17:21:07 -- target/tls.sh@270 -- # echo '{ 00:17:37.160 "subsystems": [ 00:17:37.160 { 00:17:37.160 "subsystem": "keyring", 00:17:37.160 "config": [ 00:17:37.161 { 00:17:37.161 "method": "keyring_file_add_key", 00:17:37.161 "params": { 00:17:37.161 "name": "key0", 00:17:37.161 "path": "/tmp/tmp.LiZCnHE9db" 00:17:37.161 } 00:17:37.161 } 00:17:37.161 ] 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "subsystem": "iobuf", 00:17:37.161 "config": [ 00:17:37.161 { 00:17:37.161 "method": "iobuf_set_options", 00:17:37.161 "params": { 00:17:37.161 "large_bufsize": 135168, 00:17:37.161 "large_pool_count": 1024, 00:17:37.161 "small_bufsize": 8192, 00:17:37.161 "small_pool_count": 8192 00:17:37.161 } 00:17:37.161 } 00:17:37.161 ] 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "subsystem": "sock", 00:17:37.161 "config": [ 00:17:37.161 { 00:17:37.161 "method": "sock_impl_set_options", 00:17:37.161 "params": { 00:17:37.161 "enable_ktls": false, 00:17:37.161 "enable_placement_id": 0, 00:17:37.161 "enable_quickack": false, 00:17:37.161 "enable_recv_pipe": true, 00:17:37.161 "enable_zerocopy_send_client": false, 00:17:37.161 "enable_zerocopy_send_server": true, 00:17:37.161 "impl_name": "posix", 00:17:37.161 "recv_buf_size": 2097152, 00:17:37.161 "send_buf_size": 2097152, 00:17:37.161 "tls_version": 0, 00:17:37.161 "zerocopy_threshold": 0 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "sock_impl_set_options", 00:17:37.161 "params": { 00:17:37.161 "enable_ktls": false, 00:17:37.161 "enable_placement_id": 0, 00:17:37.161 "enable_quickack": false, 00:17:37.161 "enable_recv_pipe": true, 00:17:37.161 "enable_zerocopy_send_client": false, 00:17:37.161 "enable_zerocopy_send_server": true, 00:17:37.161 "impl_name": "ssl", 00:17:37.161 "recv_buf_size": 4096, 00:17:37.161 "send_buf_size": 4096, 00:17:37.161 "tls_version": 0, 00:17:37.161 "zerocopy_threshold": 0 00:17:37.161 } 00:17:37.161 } 00:17:37.161 ] 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "subsystem": "vmd", 00:17:37.161 "config": [] 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "subsystem": "accel", 00:17:37.161 "config": [ 00:17:37.161 { 00:17:37.161 "method": "accel_set_options", 00:17:37.161 "params": { 00:17:37.161 "buf_count": 2048, 00:17:37.161 "large_cache_size": 16, 00:17:37.161 "sequence_count": 2048, 00:17:37.161 "small_cache_size": 128, 00:17:37.161 "task_count": 2048 00:17:37.161 } 00:17:37.161 } 00:17:37.161 ] 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "subsystem": "bdev", 00:17:37.161 "config": [ 00:17:37.161 { 00:17:37.161 "method": "bdev_set_options", 00:17:37.161 "params": { 00:17:37.161 "bdev_auto_examine": true, 00:17:37.161 "bdev_io_cache_size": 256, 00:17:37.161 "bdev_io_pool_size": 65535, 00:17:37.161 "iobuf_large_cache_size": 16, 00:17:37.161 "iobuf_small_cache_size": 128 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "bdev_raid_set_options", 00:17:37.161 "params": { 00:17:37.161 "process_window_size_kb": 1024 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "bdev_iscsi_set_options", 00:17:37.161 "params": { 00:17:37.161 "timeout_sec": 30 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "bdev_nvme_set_options", 00:17:37.161 "params": 
{ 00:17:37.161 "action_on_timeout": "none", 00:17:37.161 "allow_accel_sequence": false, 00:17:37.161 "arbitration_burst": 0, 00:17:37.161 "bdev_retry_count": 3, 00:17:37.161 "ctrlr_loss_timeout_sec": 0, 00:17:37.161 "delay_cmd_submit": true, 00:17:37.161 "dhchap_dhgroups": [ 00:17:37.161 "null", 00:17:37.161 "ffdhe2048", 00:17:37.161 "ffdhe3072", 00:17:37.161 "ffdhe4096", 00:17:37.161 "ffdhe6144", 00:17:37.161 "ffdhe8192" 00:17:37.161 ], 00:17:37.161 "dhchap_digests": [ 00:17:37.161 "sha256", 00:17:37.161 "sha384", 00:17:37.161 "sha512" 00:17:37.161 ], 00:17:37.161 "disable_auto_failback": false, 00:17:37.161 "fast_io_fail_timeout_sec": 0, 00:17:37.161 "generate_uuids": false, 00:17:37.161 "high_priority_weight": 0, 00:17:37.161 "io_path_stat": false, 00:17:37.161 "io_queue_requests": 512, 00:17:37.161 "keep_alive_timeout_ms": 10000, 00:17:37.161 "low_priority_weight": 0, 00:17:37.161 "medium_priority_weight": 0, 00:17:37.161 "nvme_adminq_poll_period_us": 10000, 00:17:37.161 "nvme_error_stat": false, 00:17:37.161 "nvme_ioq_poll_period_us": 0, 00:17:37.161 "rdma_cm_event_timeout_ms": 0, 00:17:37.161 "rdma_max_cq_size": 0, 00:17:37.161 "rdma_srq_size": 0, 00:17:37.161 "reconnect_delay_sec": 0, 00:17:37.161 "timeout_admin_us": 0, 00:17:37.161 "timeout_us": 0, 00:17:37.161 "transport_ack_timeout": 0, 00:17:37.161 "transport_retry_count": 4, 00:17:37.161 "transport_tos": 0 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "bdev_nvme_attach_controller", 00:17:37.161 "params": { 00:17:37.161 "adrfam": "IPv4", 00:17:37.161 "ctrlr_loss_timeout_sec": 0, 00:17:37.161 "ddgst": false, 00:17:37.161 "fast_io_fail_timeout_sec": 0, 00:17:37.161 "hdgst": false, 00:17:37.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.161 "name": "nvme0", 00:17:37.161 "prchk_guard": false, 00:17:37.161 "prchk_reftag": false, 00:17:37.161 "psk": "key0", 00:17:37.161 "reconnect_delay_sec": 0, 00:17:37.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.161 "traddr": "10.0.0.2", 00:17:37.161 "trsvcid": "4420", 00:17:37.161 "trtype": "TCP" 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "bdev_nvme_set_hotplug", 00:17:37.161 "params": { 00:17:37.161 "enable": false, 00:17:37.161 "period_us": 100000 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "bdev_enable_histogram", 00:17:37.161 "params": { 00:17:37.161 "enable": true, 00:17:37.161 "name": "nvme0n1" 00:17:37.161 } 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "method": "bdev_wait_for_examine" 00:17:37.161 } 00:17:37.161 ] 00:17:37.161 }, 00:17:37.161 { 00:17:37.161 "subsystem": "nbd", 00:17:37.162 "config": [] 00:17:37.162 } 00:17:37.162 ] 00:17:37.162 }' 00:17:37.162 [2024-04-25 17:21:07.104279] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:17:37.162 [2024-04-25 17:21:07.104389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83139 ] 00:17:37.421 [2024-04-25 17:21:07.239115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.421 [2024-04-25 17:21:07.290803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.680 [2024-04-25 17:21:07.414444] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.247 17:21:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.247 17:21:08 -- common/autotest_common.sh@850 -- # return 0 00:17:38.247 17:21:08 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:38.247 17:21:08 -- target/tls.sh@275 -- # jq -r '.[].name' 00:17:38.504 17:21:08 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.504 17:21:08 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.504 Running I/O for 1 seconds... 00:17:39.439 00:17:39.439 Latency(us) 00:17:39.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.439 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:39.439 Verification LBA range: start 0x0 length 0x2000 00:17:39.439 nvme0n1 : 1.02 4273.54 16.69 0.00 0.00 29652.85 6076.97 19541.64 00:17:39.439 =================================================================================================================== 00:17:39.439 Total : 4273.54 16.69 0.00 0.00 29652.85 6076.97 19541.64 00:17:39.439 0 00:17:39.439 17:21:09 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:17:39.439 17:21:09 -- target/tls.sh@279 -- # cleanup 00:17:39.439 17:21:09 -- target/tls.sh@15 -- # process_shm --id 0 00:17:39.439 17:21:09 -- common/autotest_common.sh@794 -- # type=--id 00:17:39.439 17:21:09 -- common/autotest_common.sh@795 -- # id=0 00:17:39.439 17:21:09 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:39.439 17:21:09 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:39.698 17:21:09 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:39.698 17:21:09 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:39.698 17:21:09 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:39.698 17:21:09 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:39.698 nvmf_trace.0 00:17:39.698 17:21:09 -- common/autotest_common.sh@809 -- # return 0 00:17:39.698 17:21:09 -- target/tls.sh@16 -- # killprocess 83139 00:17:39.698 17:21:09 -- common/autotest_common.sh@936 -- # '[' -z 83139 ']' 00:17:39.698 17:21:09 -- common/autotest_common.sh@940 -- # kill -0 83139 00:17:39.698 17:21:09 -- common/autotest_common.sh@941 -- # uname 00:17:39.698 17:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.698 17:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83139 00:17:39.698 killing process with pid 83139 00:17:39.698 Received shutdown signal, test time was about 1.000000 seconds 00:17:39.698 00:17:39.698 Latency(us) 00:17:39.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.698 
=================================================================================================================== 00:17:39.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.698 17:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:39.698 17:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:39.698 17:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83139' 00:17:39.698 17:21:09 -- common/autotest_common.sh@955 -- # kill 83139 00:17:39.698 17:21:09 -- common/autotest_common.sh@960 -- # wait 83139 00:17:39.956 17:21:09 -- target/tls.sh@17 -- # nvmftestfini 00:17:39.957 17:21:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:39.957 17:21:09 -- nvmf/common.sh@117 -- # sync 00:17:39.957 17:21:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.957 17:21:09 -- nvmf/common.sh@120 -- # set +e 00:17:39.957 17:21:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.957 17:21:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.957 rmmod nvme_tcp 00:17:39.957 rmmod nvme_fabrics 00:17:39.957 rmmod nvme_keyring 00:17:39.957 17:21:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.957 17:21:09 -- nvmf/common.sh@124 -- # set -e 00:17:39.957 17:21:09 -- nvmf/common.sh@125 -- # return 0 00:17:39.957 17:21:09 -- nvmf/common.sh@478 -- # '[' -n 83088 ']' 00:17:39.957 17:21:09 -- nvmf/common.sh@479 -- # killprocess 83088 00:17:39.957 17:21:09 -- common/autotest_common.sh@936 -- # '[' -z 83088 ']' 00:17:39.957 17:21:09 -- common/autotest_common.sh@940 -- # kill -0 83088 00:17:39.957 17:21:09 -- common/autotest_common.sh@941 -- # uname 00:17:39.957 17:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.957 17:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83088 00:17:39.957 killing process with pid 83088 00:17:39.957 17:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:39.957 17:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:39.957 17:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83088' 00:17:39.957 17:21:09 -- common/autotest_common.sh@955 -- # kill 83088 00:17:39.957 17:21:09 -- common/autotest_common.sh@960 -- # wait 83088 00:17:40.215 17:21:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:40.215 17:21:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:40.215 17:21:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:40.215 17:21:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.215 17:21:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.215 17:21:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.215 17:21:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.215 17:21:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.215 17:21:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:40.215 17:21:10 -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZFn3ATX4XS /tmp/tmp.NEdqRm0FHH /tmp/tmp.LiZCnHE9db 00:17:40.215 00:17:40.215 real 1m21.731s 00:17:40.215 user 2m7.888s 00:17:40.215 sys 0m27.202s 00:17:40.215 17:21:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:40.215 ************************************ 00:17:40.215 END TEST nvmf_tls 00:17:40.215 ************************************ 00:17:40.215 17:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:40.215 17:21:10 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:40.215 17:21:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:40.215 17:21:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.215 17:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:40.215 ************************************ 00:17:40.215 START TEST nvmf_fips 00:17:40.215 ************************************ 00:17:40.215 17:21:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:40.475 * Looking for test storage... 00:17:40.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:40.475 17:21:10 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.475 17:21:10 -- nvmf/common.sh@7 -- # uname -s 00:17:40.475 17:21:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.475 17:21:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.475 17:21:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.475 17:21:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.475 17:21:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.475 17:21:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.475 17:21:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.475 17:21:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.475 17:21:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.475 17:21:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.475 17:21:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:17:40.475 17:21:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:17:40.475 17:21:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.475 17:21:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.475 17:21:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.475 17:21:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.475 17:21:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.475 17:21:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.475 17:21:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.475 17:21:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.475 17:21:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.475 17:21:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.475 17:21:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.475 17:21:10 -- paths/export.sh@5 -- # export PATH 00:17:40.475 17:21:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.475 17:21:10 -- nvmf/common.sh@47 -- # : 0 00:17:40.475 17:21:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.475 17:21:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.475 17:21:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.475 17:21:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.475 17:21:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.475 17:21:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.475 17:21:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.475 17:21:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.475 17:21:10 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.475 17:21:10 -- fips/fips.sh@89 -- # check_openssl_version 00:17:40.475 17:21:10 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:40.475 17:21:10 -- fips/fips.sh@85 -- # openssl version 00:17:40.475 17:21:10 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:40.475 17:21:10 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:40.475 17:21:10 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:40.475 17:21:10 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:40.475 17:21:10 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:40.475 17:21:10 -- scripts/common.sh@333 -- # IFS=.-: 00:17:40.475 17:21:10 -- scripts/common.sh@333 -- # read -ra ver1 00:17:40.475 17:21:10 -- scripts/common.sh@334 -- # IFS=.-: 00:17:40.475 17:21:10 -- scripts/common.sh@334 -- # read -ra ver2 00:17:40.475 17:21:10 -- scripts/common.sh@335 -- # local 'op=>=' 00:17:40.475 17:21:10 -- scripts/common.sh@337 -- # ver1_l=3 00:17:40.475 17:21:10 -- scripts/common.sh@338 -- # ver2_l=3 00:17:40.475 17:21:10 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:40.475 17:21:10 -- 
scripts/common.sh@341 -- # case "$op" in 00:17:40.475 17:21:10 -- scripts/common.sh@345 -- # : 1 00:17:40.475 17:21:10 -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:40.475 17:21:10 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.475 17:21:10 -- scripts/common.sh@362 -- # decimal 3 00:17:40.475 17:21:10 -- scripts/common.sh@350 -- # local d=3 00:17:40.475 17:21:10 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:40.475 17:21:10 -- scripts/common.sh@352 -- # echo 3 00:17:40.475 17:21:10 -- scripts/common.sh@362 -- # ver1[v]=3 00:17:40.475 17:21:10 -- scripts/common.sh@363 -- # decimal 3 00:17:40.475 17:21:10 -- scripts/common.sh@350 -- # local d=3 00:17:40.475 17:21:10 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:40.475 17:21:10 -- scripts/common.sh@352 -- # echo 3 00:17:40.475 17:21:10 -- scripts/common.sh@363 -- # ver2[v]=3 00:17:40.475 17:21:10 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:40.475 17:21:10 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:40.475 17:21:10 -- scripts/common.sh@361 -- # (( v++ )) 00:17:40.475 17:21:10 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.475 17:21:10 -- scripts/common.sh@362 -- # decimal 0 00:17:40.475 17:21:10 -- scripts/common.sh@350 -- # local d=0 00:17:40.475 17:21:10 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:40.475 17:21:10 -- scripts/common.sh@352 -- # echo 0 00:17:40.475 17:21:10 -- scripts/common.sh@362 -- # ver1[v]=0 00:17:40.475 17:21:10 -- scripts/common.sh@363 -- # decimal 0 00:17:40.475 17:21:10 -- scripts/common.sh@350 -- # local d=0 00:17:40.475 17:21:10 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:40.475 17:21:10 -- scripts/common.sh@352 -- # echo 0 00:17:40.475 17:21:10 -- scripts/common.sh@363 -- # ver2[v]=0 00:17:40.475 17:21:10 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:40.475 17:21:10 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:40.475 17:21:10 -- scripts/common.sh@361 -- # (( v++ )) 00:17:40.475 17:21:10 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.475 17:21:10 -- scripts/common.sh@362 -- # decimal 9 00:17:40.475 17:21:10 -- scripts/common.sh@350 -- # local d=9 00:17:40.475 17:21:10 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:40.475 17:21:10 -- scripts/common.sh@352 -- # echo 9 00:17:40.475 17:21:10 -- scripts/common.sh@362 -- # ver1[v]=9 00:17:40.475 17:21:10 -- scripts/common.sh@363 -- # decimal 0 00:17:40.475 17:21:10 -- scripts/common.sh@350 -- # local d=0 00:17:40.475 17:21:10 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:40.475 17:21:10 -- scripts/common.sh@352 -- # echo 0 00:17:40.475 17:21:10 -- scripts/common.sh@363 -- # ver2[v]=0 00:17:40.475 17:21:10 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:40.475 17:21:10 -- scripts/common.sh@364 -- # return 0 00:17:40.475 17:21:10 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:40.475 17:21:10 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:40.475 17:21:10 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:40.475 17:21:10 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:40.476 17:21:10 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:40.476 17:21:10 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:40.476 17:21:10 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:40.476 17:21:10 -- fips/fips.sh@113 -- # build_openssl_config 00:17:40.476 17:21:10 -- fips/fips.sh@37 -- # cat 00:17:40.476 17:21:10 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:40.476 17:21:10 -- fips/fips.sh@58 -- # cat - 00:17:40.476 17:21:10 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:40.476 17:21:10 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:40.476 17:21:10 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:40.476 17:21:10 -- fips/fips.sh@116 -- # openssl list -providers 00:17:40.476 17:21:10 -- fips/fips.sh@116 -- # grep name 00:17:40.476 17:21:10 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:40.476 17:21:10 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:40.476 17:21:10 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:40.476 17:21:10 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:40.476 17:21:10 -- fips/fips.sh@127 -- # : 00:17:40.476 17:21:10 -- common/autotest_common.sh@638 -- # local es=0 00:17:40.476 17:21:10 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:40.476 17:21:10 -- common/autotest_common.sh@626 -- # local arg=openssl 00:17:40.476 17:21:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.476 17:21:10 -- common/autotest_common.sh@630 -- # type -t openssl 00:17:40.476 17:21:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.476 17:21:10 -- common/autotest_common.sh@632 -- # type -P openssl 00:17:40.476 17:21:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.476 17:21:10 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:17:40.476 17:21:10 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:17:40.476 17:21:10 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:17:40.476 Error setting digest 00:17:40.476 00E242D8E27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:40.476 00E242D8E27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:40.476 17:21:10 -- common/autotest_common.sh@641 -- # es=1 00:17:40.476 17:21:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:40.476 17:21:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:40.476 17:21:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:40.476 17:21:10 -- fips/fips.sh@130 -- # nvmftestinit 00:17:40.476 17:21:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:40.476 17:21:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.476 17:21:10 -- nvmf/common.sh@437 -- # prepare_net_devs 
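The FIPS gate traced above boils down to three checks: the OpenSSL version must be at least 3.0.0, a fips provider module must be present, and a non-approved digest such as MD5 must be rejected once OPENSSL_CONF points at a FIPS-only configuration (the test generates spdk_fips.conf on the fly; it is assumed to already exist in the sketch below, and the module path is the one reported on this host). A minimal standalone re-creation of the same idea, not the test script itself:

    #!/usr/bin/env bash
    set -e
    ver=$(openssl version | awk '{print $2}')                    # 3.0.9 on this host
    printf '3.0.0\n%s\n' "$ver" | sort -V -C \
        || { echo "OpenSSL $ver is older than 3.0.0"; exit 1; }
    moddir=$(openssl info -modulesdir)                           # /usr/lib64/ossl-modules here
    [[ -f "$moddir/fips.so" ]] || { echo "no FIPS provider module"; exit 1; }
    export OPENSSL_CONF=spdk_fips.conf                           # config enabling only the fips+base providers
    openssl list -providers | grep -qi fips                      # provider actually loads
    if openssl md5 /dev/null >/dev/null 2>&1; then               # MD5 must fail under FIPS
        echo "MD5 still usable - FIPS mode is not active"; exit 1
    fi
    echo "FIPS checks passed"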
00:17:40.476 17:21:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:40.476 17:21:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:40.476 17:21:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.476 17:21:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.476 17:21:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.476 17:21:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:40.476 17:21:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:40.476 17:21:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:40.476 17:21:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:40.476 17:21:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:40.476 17:21:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:40.476 17:21:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.476 17:21:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.476 17:21:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.476 17:21:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:40.476 17:21:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.476 17:21:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.476 17:21:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.476 17:21:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.476 17:21:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.476 17:21:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.476 17:21:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.476 17:21:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.476 17:21:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:40.476 17:21:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:40.476 Cannot find device "nvmf_tgt_br" 00:17:40.476 17:21:10 -- nvmf/common.sh@155 -- # true 00:17:40.476 17:21:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.735 Cannot find device "nvmf_tgt_br2" 00:17:40.735 17:21:10 -- nvmf/common.sh@156 -- # true 00:17:40.735 17:21:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:40.735 17:21:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:40.735 Cannot find device "nvmf_tgt_br" 00:17:40.735 17:21:10 -- nvmf/common.sh@158 -- # true 00:17:40.735 17:21:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:40.735 Cannot find device "nvmf_tgt_br2" 00:17:40.735 17:21:10 -- nvmf/common.sh@159 -- # true 00:17:40.735 17:21:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:40.735 17:21:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:40.735 17:21:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:40.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.735 17:21:10 -- nvmf/common.sh@162 -- # true 00:17:40.735 17:21:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:40.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:40.735 17:21:10 -- nvmf/common.sh@163 -- # true 00:17:40.735 17:21:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:40.735 17:21:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:40.735 17:21:10 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:40.735 17:21:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:40.735 17:21:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:40.735 17:21:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:40.735 17:21:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:40.735 17:21:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:40.735 17:21:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:40.735 17:21:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:40.735 17:21:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:40.735 17:21:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:40.735 17:21:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:40.735 17:21:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:40.735 17:21:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:40.735 17:21:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:40.735 17:21:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:40.735 17:21:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:40.735 17:21:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:40.735 17:21:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:40.994 17:21:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:40.994 17:21:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:40.994 17:21:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:40.994 17:21:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:40.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:40.994 00:17:40.994 --- 10.0.0.2 ping statistics --- 00:17:40.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.994 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:40.994 17:21:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:40.994 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:40.994 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:40.994 00:17:40.994 --- 10.0.0.3 ping statistics --- 00:17:40.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.994 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:40.994 17:21:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:40.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:40.994 00:17:40.994 --- 10.0.0.1 ping statistics --- 00:17:40.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.994 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:40.994 17:21:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.994 17:21:10 -- nvmf/common.sh@422 -- # return 0 00:17:40.994 17:21:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:40.994 17:21:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.994 17:21:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:40.994 17:21:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:40.994 17:21:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.994 17:21:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:40.994 17:21:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:40.994 17:21:10 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:40.994 17:21:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:40.994 17:21:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:40.994 17:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:40.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.994 17:21:10 -- nvmf/common.sh@470 -- # nvmfpid=83421 00:17:40.994 17:21:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:40.994 17:21:10 -- nvmf/common.sh@471 -- # waitforlisten 83421 00:17:40.994 17:21:10 -- common/autotest_common.sh@817 -- # '[' -z 83421 ']' 00:17:40.994 17:21:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.994 17:21:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.994 17:21:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.994 17:21:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.994 17:21:10 -- common/autotest_common.sh@10 -- # set +x 00:17:40.994 [2024-04-25 17:21:10.864428] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:40.994 [2024-04-25 17:21:10.864725] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.253 [2024-04-25 17:21:11.004993] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.253 [2024-04-25 17:21:11.075079] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.253 [2024-04-25 17:21:11.075354] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.253 [2024-04-25 17:21:11.075525] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.253 [2024-04-25 17:21:11.075722] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.253 [2024-04-25 17:21:11.075855] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
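The network that nvmf_veth_init assembles above is small enough to reproduce by hand: one veth pair for the initiator, two pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, all host-side ends enslaved to one bridge, plus an iptables rule admitting NVMe/TCP on port 4420. Condensed from the trace, keeping the same interface, namespace and address names:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target sanity check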
00:17:41.253 [2024-04-25 17:21:11.075911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.822 17:21:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.822 17:21:11 -- common/autotest_common.sh@850 -- # return 0 00:17:41.822 17:21:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:41.822 17:21:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:41.822 17:21:11 -- common/autotest_common.sh@10 -- # set +x 00:17:42.082 17:21:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.082 17:21:11 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:42.082 17:21:11 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:42.082 17:21:11 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.082 17:21:11 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:42.082 17:21:11 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.082 17:21:11 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.082 17:21:11 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:42.082 17:21:11 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.082 [2024-04-25 17:21:12.049980] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.341 [2024-04-25 17:21:12.065952] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:42.341 [2024-04-25 17:21:12.066139] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.341 [2024-04-25 17:21:12.091262] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:42.341 malloc0 00:17:42.341 17:21:12 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.341 17:21:12 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.341 17:21:12 -- fips/fips.sh@147 -- # bdevperf_pid=83475 00:17:42.341 17:21:12 -- fips/fips.sh@148 -- # waitforlisten 83475 /var/tmp/bdevperf.sock 00:17:42.341 17:21:12 -- common/autotest_common.sh@817 -- # '[' -z 83475 ']' 00:17:42.341 17:21:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.341 17:21:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.341 17:21:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.341 17:21:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.341 17:21:12 -- common/autotest_common.sh@10 -- # set +x 00:17:42.341 [2024-04-25 17:21:12.185657] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
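The TLS setup above reduces to writing the interchange-format PSK into a 0600 file and handing that file to both sides of the connection: the target attaches it when the host NQN is allowed onto the subsystem, and the initiator passes the same path to bdev_nvme_attach_controller --psk (next entry). A sketch of the target-side RPC sequence; the --secure-channel and --psk spellings are assumptions for this SPDK revision (the trace only confirms that a PSK path was configured and that it was already marked deprecated), so check each command's --help in rpc.py before reusing:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
    chmod 0600 "$key"                                            # keep the PSK private, as the test does

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b malloc0 32 4096                   # backing namespace; size illustrative
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --secure-channel              # TLS listener (experimental per the trace)
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"                   # deprecated PSK-path form seen in the trace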
00:17:42.341 [2024-04-25 17:21:12.185773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83475 ] 00:17:42.341 [2024-04-25 17:21:12.316363] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.601 [2024-04-25 17:21:12.378123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.169 17:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.169 17:21:13 -- common/autotest_common.sh@850 -- # return 0 00:17:43.169 17:21:13 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:43.428 [2024-04-25 17:21:13.269937] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.428 [2024-04-25 17:21:13.270037] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:43.428 TLSTESTn1 00:17:43.428 17:21:13 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.687 Running I/O for 10 seconds... 00:17:53.666 00:17:53.666 Latency(us) 00:17:53.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.666 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.666 Verification LBA range: start 0x0 length 0x2000 00:17:53.666 TLSTESTn1 : 10.02 4399.84 17.19 0.00 0.00 29032.08 5689.72 23950.43 00:17:53.666 =================================================================================================================== 00:17:53.666 Total : 4399.84 17.19 0.00 0.00 29032.08 5689.72 23950.43 00:17:53.666 0 00:17:53.666 17:21:23 -- fips/fips.sh@1 -- # cleanup 00:17:53.666 17:21:23 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:53.666 17:21:23 -- common/autotest_common.sh@794 -- # type=--id 00:17:53.666 17:21:23 -- common/autotest_common.sh@795 -- # id=0 00:17:53.666 17:21:23 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:53.666 17:21:23 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:53.666 17:21:23 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:53.666 17:21:23 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:53.666 17:21:23 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:53.666 17:21:23 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:53.666 nvmf_trace.0 00:17:53.666 17:21:23 -- common/autotest_common.sh@809 -- # return 0 00:17:53.666 17:21:23 -- fips/fips.sh@16 -- # killprocess 83475 00:17:53.666 17:21:23 -- common/autotest_common.sh@936 -- # '[' -z 83475 ']' 00:17:53.666 17:21:23 -- common/autotest_common.sh@940 -- # kill -0 83475 00:17:53.666 17:21:23 -- common/autotest_common.sh@941 -- # uname 00:17:53.666 17:21:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:53.666 17:21:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83475 00:17:53.666 17:21:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:53.666 
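bdevperf is launched with -z, so it parks on its own RPC socket instead of running immediately; the TLS controller is attached over that socket and the helper script then triggers the configured verify workload, which is what produced the TLSTESTn1 latency numbers above. Condensed from the trace, paths and arguments as in this run:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock
    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &   # -z: wait for RPCs before starting I/O
    # (the test polls the socket with waitforlisten before issuing RPCs)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

    # run the workload against every attached bdev (TLSTESTn1 here) and print the latency table
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests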
17:21:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:53.666 killing process with pid 83475 00:17:53.666 17:21:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83475' 00:17:53.666 17:21:23 -- common/autotest_common.sh@955 -- # kill 83475 00:17:53.666 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.666 00:17:53.666 Latency(us) 00:17:53.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.666 =================================================================================================================== 00:17:53.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.666 [2024-04-25 17:21:23.594166] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.666 17:21:23 -- common/autotest_common.sh@960 -- # wait 83475 00:17:53.924 17:21:23 -- fips/fips.sh@17 -- # nvmftestfini 00:17:53.924 17:21:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:53.924 17:21:23 -- nvmf/common.sh@117 -- # sync 00:17:53.924 17:21:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.924 17:21:23 -- nvmf/common.sh@120 -- # set +e 00:17:53.924 17:21:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.924 17:21:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.924 rmmod nvme_tcp 00:17:53.924 rmmod nvme_fabrics 00:17:53.924 rmmod nvme_keyring 00:17:53.924 17:21:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.924 17:21:23 -- nvmf/common.sh@124 -- # set -e 00:17:53.924 17:21:23 -- nvmf/common.sh@125 -- # return 0 00:17:53.924 17:21:23 -- nvmf/common.sh@478 -- # '[' -n 83421 ']' 00:17:53.924 17:21:23 -- nvmf/common.sh@479 -- # killprocess 83421 00:17:53.924 17:21:23 -- common/autotest_common.sh@936 -- # '[' -z 83421 ']' 00:17:53.924 17:21:23 -- common/autotest_common.sh@940 -- # kill -0 83421 00:17:53.924 17:21:23 -- common/autotest_common.sh@941 -- # uname 00:17:53.924 17:21:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:53.924 17:21:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83421 00:17:53.924 17:21:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:53.924 killing process with pid 83421 00:17:53.924 17:21:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:53.924 17:21:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83421' 00:17:53.924 17:21:23 -- common/autotest_common.sh@955 -- # kill 83421 00:17:53.924 [2024-04-25 17:21:23.871968] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:53.924 17:21:23 -- common/autotest_common.sh@960 -- # wait 83421 00:17:54.182 17:21:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:54.182 17:21:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:54.182 17:21:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:54.182 17:21:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:54.182 17:21:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:54.182 17:21:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.182 17:21:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.182 17:21:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.182 17:21:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:54.182 17:21:24 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:54.182 00:17:54.182 real 0m13.955s 00:17:54.182 user 0m18.647s 00:17:54.182 sys 0m5.664s 00:17:54.182 17:21:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:54.182 ************************************ 00:17:54.182 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:17:54.182 END TEST nvmf_fips 00:17:54.182 ************************************ 00:17:54.182 17:21:24 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:17:54.182 17:21:24 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:54.182 17:21:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:54.182 17:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:54.182 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:17:54.441 ************************************ 00:17:54.441 START TEST nvmf_fuzz 00:17:54.441 ************************************ 00:17:54.441 17:21:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:54.441 * Looking for test storage... 00:17:54.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:54.441 17:21:24 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.441 17:21:24 -- nvmf/common.sh@7 -- # uname -s 00:17:54.441 17:21:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.441 17:21:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.441 17:21:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.441 17:21:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.441 17:21:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.441 17:21:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.441 17:21:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.441 17:21:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.441 17:21:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.441 17:21:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.441 17:21:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:17:54.441 17:21:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:17:54.441 17:21:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.441 17:21:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.441 17:21:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:54.441 17:21:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.441 17:21:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.441 17:21:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.441 17:21:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.441 17:21:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.441 17:21:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.441 17:21:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.441 17:21:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.441 17:21:24 -- paths/export.sh@5 -- # export PATH 00:17:54.441 17:21:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.441 17:21:24 -- nvmf/common.sh@47 -- # : 0 00:17:54.441 17:21:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:54.441 17:21:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:54.441 17:21:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.441 17:21:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.441 17:21:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.441 17:21:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:54.441 17:21:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:54.441 17:21:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:54.441 17:21:24 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:54.441 17:21:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:54.441 17:21:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.441 17:21:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:54.441 17:21:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:54.441 17:21:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:54.441 17:21:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.441 17:21:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.441 17:21:24 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.441 17:21:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:54.441 17:21:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:54.441 17:21:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:54.441 17:21:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:54.441 17:21:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:54.441 17:21:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:54.441 17:21:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.441 17:21:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.442 17:21:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:54.442 17:21:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:54.442 17:21:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:54.442 17:21:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:54.442 17:21:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:54.442 17:21:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.442 17:21:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:54.442 17:21:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:54.442 17:21:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:54.442 17:21:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:54.442 17:21:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:54.442 17:21:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:54.442 Cannot find device "nvmf_tgt_br" 00:17:54.442 17:21:24 -- nvmf/common.sh@155 -- # true 00:17:54.442 17:21:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:54.442 Cannot find device "nvmf_tgt_br2" 00:17:54.442 17:21:24 -- nvmf/common.sh@156 -- # true 00:17:54.442 17:21:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:54.442 17:21:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:54.442 Cannot find device "nvmf_tgt_br" 00:17:54.442 17:21:24 -- nvmf/common.sh@158 -- # true 00:17:54.442 17:21:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:54.442 Cannot find device "nvmf_tgt_br2" 00:17:54.442 17:21:24 -- nvmf/common.sh@159 -- # true 00:17:54.442 17:21:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:54.700 17:21:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:54.700 17:21:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:54.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.700 17:21:24 -- nvmf/common.sh@162 -- # true 00:17:54.700 17:21:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:54.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:54.700 17:21:24 -- nvmf/common.sh@163 -- # true 00:17:54.700 17:21:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:54.700 17:21:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:54.700 17:21:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:54.700 17:21:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:54.700 17:21:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:54.700 17:21:24 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:54.700 17:21:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:54.700 17:21:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:54.700 17:21:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:54.700 17:21:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:54.700 17:21:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:54.700 17:21:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:54.700 17:21:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:54.700 17:21:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:54.700 17:21:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:54.700 17:21:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:54.700 17:21:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:54.700 17:21:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:54.700 17:21:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:54.700 17:21:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:54.700 17:21:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:54.700 17:21:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:54.700 17:21:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.700 17:21:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:54.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:54.700 00:17:54.700 --- 10.0.0.2 ping statistics --- 00:17:54.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.700 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:54.700 17:21:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:54.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:54.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:54.700 00:17:54.700 --- 10.0.0.3 ping statistics --- 00:17:54.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.700 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:54.700 17:21:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:54.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:17:54.700 00:17:54.700 --- 10.0.0.1 ping statistics --- 00:17:54.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.700 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:54.700 17:21:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.700 17:21:24 -- nvmf/common.sh@422 -- # return 0 00:17:54.700 17:21:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:54.700 17:21:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.700 17:21:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:54.700 17:21:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:54.700 17:21:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.700 17:21:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:54.700 17:21:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:54.700 17:21:24 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=83821 00:17:54.700 17:21:24 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:54.700 17:21:24 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:54.700 17:21:24 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 83821 00:17:54.700 17:21:24 -- common/autotest_common.sh@817 -- # '[' -z 83821 ']' 00:17:54.700 17:21:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.700 17:21:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.700 17:21:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
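With the nvmf target up (core mask 0x1), the fuzz stage that follows only needs a malloc-backed subsystem listening on 10.0.0.2:4420; nvme_fuzz then connects as a host and throws mutated commands at it, first randomly for a fixed time and seed, then replaying the canned JSON command set. The RPC and fuzzer invocations traced in the next entries amount to the following (flag meanings are taken from the fuzz app's usage text, not from this trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create -b Malloc0 64 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    $fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a                # 30 s randomized run with a fixed seed
    $fuzz -m 0x2 -F "$trid" \
        -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a   # replay canned commands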
00:17:54.701 17:21:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.701 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:17:55.695 17:21:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:55.695 17:21:25 -- common/autotest_common.sh@850 -- # return 0 00:17:55.695 17:21:25 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:55.695 17:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.695 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:17:55.695 17:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.695 17:21:25 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:55.695 17:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.695 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:17:55.955 Malloc0 00:17:55.955 17:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.955 17:21:25 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.955 17:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.955 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:17:55.955 17:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.955 17:21:25 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:55.955 17:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.955 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:17:55.955 17:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.955 17:21:25 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.955 17:21:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.955 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:17:55.955 17:21:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.955 17:21:25 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:55.955 17:21:25 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:56.214 Shutting down the fuzz application 00:17:56.214 17:21:26 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:56.473 Shutting down the fuzz application 00:17:56.473 17:21:26 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.473 17:21:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.473 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:17:56.473 17:21:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.473 17:21:26 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:56.473 17:21:26 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:56.473 17:21:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:56.473 17:21:26 -- nvmf/common.sh@117 -- # sync 00:17:56.473 17:21:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.473 17:21:26 -- nvmf/common.sh@120 -- # set +e 00:17:56.474 17:21:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.474 
17:21:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.474 rmmod nvme_tcp 00:17:56.474 rmmod nvme_fabrics 00:17:56.737 rmmod nvme_keyring 00:17:56.738 17:21:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.738 17:21:26 -- nvmf/common.sh@124 -- # set -e 00:17:56.738 17:21:26 -- nvmf/common.sh@125 -- # return 0 00:17:56.738 17:21:26 -- nvmf/common.sh@478 -- # '[' -n 83821 ']' 00:17:56.738 17:21:26 -- nvmf/common.sh@479 -- # killprocess 83821 00:17:56.738 17:21:26 -- common/autotest_common.sh@936 -- # '[' -z 83821 ']' 00:17:56.738 17:21:26 -- common/autotest_common.sh@940 -- # kill -0 83821 00:17:56.738 17:21:26 -- common/autotest_common.sh@941 -- # uname 00:17:56.738 17:21:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.738 17:21:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83821 00:17:56.738 17:21:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:56.738 17:21:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:56.738 killing process with pid 83821 00:17:56.738 17:21:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83821' 00:17:56.738 17:21:26 -- common/autotest_common.sh@955 -- # kill 83821 00:17:56.738 17:21:26 -- common/autotest_common.sh@960 -- # wait 83821 00:17:56.738 17:21:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:56.738 17:21:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:56.738 17:21:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:56.738 17:21:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.738 17:21:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.738 17:21:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.738 17:21:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.738 17:21:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.738 17:21:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:57.002 17:21:26 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:57.002 00:17:57.002 real 0m2.523s 00:17:57.002 user 0m2.661s 00:17:57.002 sys 0m0.555s 00:17:57.002 17:21:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:57.002 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:17:57.002 ************************************ 00:17:57.002 END TEST nvmf_fuzz 00:17:57.002 ************************************ 00:17:57.002 17:21:26 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:57.002 17:21:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:57.002 17:21:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:57.002 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:17:57.002 ************************************ 00:17:57.002 START TEST nvmf_multiconnection 00:17:57.002 ************************************ 00:17:57.002 17:21:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:57.002 * Looking for test storage... 
00:17:57.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:57.002 17:21:26 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.002 17:21:26 -- nvmf/common.sh@7 -- # uname -s 00:17:57.002 17:21:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.002 17:21:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.002 17:21:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.002 17:21:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.002 17:21:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.002 17:21:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.002 17:21:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.002 17:21:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.002 17:21:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.002 17:21:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.002 17:21:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:17:57.002 17:21:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:17:57.002 17:21:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.002 17:21:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.002 17:21:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.002 17:21:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.002 17:21:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.002 17:21:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.002 17:21:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.002 17:21:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.002 17:21:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.002 17:21:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.002 17:21:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.002 17:21:26 -- paths/export.sh@5 -- # export PATH 00:17:57.002 17:21:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.002 17:21:26 -- nvmf/common.sh@47 -- # : 0 00:17:57.002 17:21:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.002 17:21:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.002 17:21:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.002 17:21:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.002 17:21:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.002 17:21:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.002 17:21:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.002 17:21:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.002 17:21:26 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.002 17:21:26 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.002 17:21:26 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:57.002 17:21:26 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:57.002 17:21:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:57.002 17:21:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.002 17:21:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:57.002 17:21:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:57.002 17:21:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:57.002 17:21:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.002 17:21:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.002 17:21:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.002 17:21:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:57.002 17:21:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:57.002 17:21:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:57.002 17:21:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:57.002 17:21:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:57.002 17:21:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:57.002 17:21:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.002 17:21:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.002 17:21:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.002 17:21:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:57.002 17:21:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.002 17:21:26 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.002 17:21:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.002 17:21:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.003 17:21:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.003 17:21:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.003 17:21:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.003 17:21:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.003 17:21:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:57.003 17:21:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:57.003 Cannot find device "nvmf_tgt_br" 00:17:57.003 17:21:26 -- nvmf/common.sh@155 -- # true 00:17:57.003 17:21:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.261 Cannot find device "nvmf_tgt_br2" 00:17:57.261 17:21:26 -- nvmf/common.sh@156 -- # true 00:17:57.261 17:21:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:57.261 17:21:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:57.261 Cannot find device "nvmf_tgt_br" 00:17:57.261 17:21:27 -- nvmf/common.sh@158 -- # true 00:17:57.261 17:21:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:57.261 Cannot find device "nvmf_tgt_br2" 00:17:57.261 17:21:27 -- nvmf/common.sh@159 -- # true 00:17:57.261 17:21:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:57.261 17:21:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:57.261 17:21:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.261 17:21:27 -- nvmf/common.sh@162 -- # true 00:17:57.261 17:21:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.261 17:21:27 -- nvmf/common.sh@163 -- # true 00:17:57.261 17:21:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.261 17:21:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.261 17:21:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.261 17:21:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.261 17:21:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.261 17:21:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.261 17:21:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.261 17:21:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.262 17:21:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.262 17:21:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:57.262 17:21:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:57.262 17:21:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:57.262 17:21:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:57.262 17:21:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.262 17:21:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:17:57.262 17:21:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.262 17:21:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:57.262 17:21:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:57.262 17:21:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.521 17:21:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.521 17:21:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.521 17:21:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.521 17:21:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.521 17:21:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:57.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:17:57.521 00:17:57.521 --- 10.0.0.2 ping statistics --- 00:17:57.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.521 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:57.521 17:21:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:57.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:57.521 00:17:57.521 --- 10.0.0.3 ping statistics --- 00:17:57.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.521 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:57.521 17:21:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:57.521 00:17:57.521 --- 10.0.0.1 ping statistics --- 00:17:57.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.521 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:57.521 17:21:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.521 17:21:27 -- nvmf/common.sh@422 -- # return 0 00:17:57.521 17:21:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:57.521 17:21:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.521 17:21:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:57.521 17:21:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:57.521 17:21:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.521 17:21:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:57.521 17:21:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:57.521 17:21:27 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:57.521 17:21:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:57.521 17:21:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:57.521 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.521 17:21:27 -- nvmf/common.sh@470 -- # nvmfpid=84031 00:17:57.521 17:21:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.521 17:21:27 -- nvmf/common.sh@471 -- # waitforlisten 84031 00:17:57.521 17:21:27 -- common/autotest_common.sh@817 -- # '[' -z 84031 ']' 00:17:57.521 17:21:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.521 17:21:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.521 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:17:57.521 17:21:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.521 17:21:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.521 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:17:57.521 [2024-04-25 17:21:27.368536] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:17:57.521 [2024-04-25 17:21:27.368831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.780 [2024-04-25 17:21:27.508484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.780 [2024-04-25 17:21:27.565881] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.780 [2024-04-25 17:21:27.565928] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.780 [2024-04-25 17:21:27.565956] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.780 [2024-04-25 17:21:27.565964] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.780 [2024-04-25 17:21:27.565972] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.780 [2024-04-25 17:21:27.566101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.780 [2024-04-25 17:21:27.566829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.780 [2024-04-25 17:21:27.566908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.780 [2024-04-25 17:21:27.566912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.718 17:21:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.718 17:21:28 -- common/autotest_common.sh@850 -- # return 0 00:17:58.718 17:21:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:58.718 17:21:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.718 17:21:28 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 [2024-04-25 17:21:28.413159] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@21 -- # seq 1 11 00:17:58.718 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.718 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 Malloc1 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- 
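nvmf_veth_init has finished by this point. Condensed, the topology it built (a recap of the ip/iptables commands traced above; the individual "ip link set ... up" steps are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # bridge the three peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) verify this path before the target application is started.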
common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 [2024-04-25 17:21:28.477998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.718 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 Malloc2 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.718 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 Malloc3 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.718 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.718 17:21:28 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:58.718 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.718 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.719 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 Malloc4 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.719 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 Malloc5 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.719 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 Malloc6 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.719 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.719 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:17:58.719 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.719 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.979 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 Malloc7 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.979 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 Malloc8 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode8 Malloc8 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.979 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 Malloc9 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.979 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 Malloc10 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.979 17:21:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 Malloc11 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:17:58.979 17:21:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:58.979 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.979 17:21:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:58.979 17:21:28 -- target/multiconnection.sh@28 -- # seq 1 11 00:17:58.979 17:21:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:58.979 17:21:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.238 17:21:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:17:59.238 17:21:29 -- common/autotest_common.sh@1184 -- # local i=0 00:17:59.238 17:21:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.238 17:21:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:59.238 17:21:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:01.141 17:21:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:01.141 17:21:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:01.141 17:21:31 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:18:01.400 17:21:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:01.400 17:21:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.400 17:21:31 -- common/autotest_common.sh@1194 -- # return 0 00:18:01.400 17:21:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.400 17:21:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:01.400 17:21:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:01.400 17:21:31 -- common/autotest_common.sh@1184 -- # local i=0 00:18:01.400 17:21:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.400 17:21:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:01.400 17:21:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:03.964 17:21:33 -- 
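The block above repeats the same four RPCs for Malloc1/cnode1 through Malloc11/cnode11. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the equivalent manual commands for one subsystem would be roughly as follows (against the default /var/tmp/spdk.sock RPC socket used by this run):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                            # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1    # -a: allow any host, -s: serial
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420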
common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:03.964 17:21:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:03.964 17:21:33 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:18:03.964 17:21:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:03.964 17:21:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.964 17:21:33 -- common/autotest_common.sh@1194 -- # return 0 00:18:03.964 17:21:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:03.964 17:21:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:03.964 17:21:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:03.964 17:21:33 -- common/autotest_common.sh@1184 -- # local i=0 00:18:03.964 17:21:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.964 17:21:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:03.964 17:21:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:05.870 17:21:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:05.870 17:21:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:05.870 17:21:35 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:18:05.870 17:21:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:05.870 17:21:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.870 17:21:35 -- common/autotest_common.sh@1194 -- # return 0 00:18:05.870 17:21:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:05.870 17:21:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:05.870 17:21:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:05.870 17:21:35 -- common/autotest_common.sh@1184 -- # local i=0 00:18:05.870 17:21:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.870 17:21:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:05.870 17:21:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:07.775 17:21:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:07.775 17:21:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:07.775 17:21:37 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:18:07.775 17:21:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:07.775 17:21:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.775 17:21:37 -- common/autotest_common.sh@1194 -- # return 0 00:18:07.775 17:21:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:07.775 17:21:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:08.034 17:21:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:08.034 17:21:37 -- common/autotest_common.sh@1184 -- # local i=0 00:18:08.034 17:21:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.034 17:21:37 -- 
common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:08.034 17:21:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:09.937 17:21:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:09.937 17:21:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:09.937 17:21:39 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:18:09.937 17:21:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:09.937 17:21:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.937 17:21:39 -- common/autotest_common.sh@1194 -- # return 0 00:18:09.937 17:21:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:09.937 17:21:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:10.195 17:21:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:10.195 17:21:40 -- common/autotest_common.sh@1184 -- # local i=0 00:18:10.195 17:21:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.195 17:21:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:10.195 17:21:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:12.727 17:21:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:12.727 17:21:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:12.727 17:21:42 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:18:12.727 17:21:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:12.727 17:21:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.727 17:21:42 -- common/autotest_common.sh@1194 -- # return 0 00:18:12.727 17:21:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.727 17:21:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:12.727 17:21:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:12.727 17:21:42 -- common/autotest_common.sh@1184 -- # local i=0 00:18:12.727 17:21:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.727 17:21:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:12.727 17:21:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:14.637 17:21:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:14.637 17:21:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:14.637 17:21:44 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:18:14.637 17:21:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:14.637 17:21:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.637 17:21:44 -- common/autotest_common.sh@1194 -- # return 0 00:18:14.637 17:21:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.637 17:21:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:14.637 17:21:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:14.637 17:21:44 -- common/autotest_common.sh@1184 -- # local i=0 
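The connect loop running here issues the same pair of steps for every subsystem; stripped of the xtrace noise it is approximately the following (the host NQN/UUID is the one printed above, and the polling loop is a condensed reading of the waitforserial helper, which retries for up to about 15 iterations):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode$i \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 \
        --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7
    # waitforserial SPDK$i: poll until a block device with that serial shows up in lsblk
    while (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") < 1 )); do sleep 2; done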
00:18:14.637 17:21:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.637 17:21:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:14.637 17:21:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:16.541 17:21:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:16.541 17:21:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:16.541 17:21:46 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:18:16.541 17:21:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:16.541 17:21:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.541 17:21:46 -- common/autotest_common.sh@1194 -- # return 0 00:18:16.541 17:21:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.541 17:21:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:16.800 17:21:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:16.800 17:21:46 -- common/autotest_common.sh@1184 -- # local i=0 00:18:16.800 17:21:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.801 17:21:46 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:16.801 17:21:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:19.333 17:21:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:19.333 17:21:48 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:18:19.333 17:21:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:19.333 17:21:48 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:19.333 17:21:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.333 17:21:48 -- common/autotest_common.sh@1194 -- # return 0 00:18:19.333 17:21:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.333 17:21:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:19.333 17:21:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:19.333 17:21:48 -- common/autotest_common.sh@1184 -- # local i=0 00:18:19.333 17:21:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.333 17:21:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:19.333 17:21:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:21.244 17:21:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:21.244 17:21:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:21.244 17:21:50 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:18:21.244 17:21:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:21.244 17:21:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.244 17:21:50 -- common/autotest_common.sh@1194 -- # return 0 00:18:21.244 17:21:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.244 17:21:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:21.244 17:21:51 
-- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:21.244 17:21:51 -- common/autotest_common.sh@1184 -- # local i=0 00:18:21.244 17:21:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.244 17:21:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:21.244 17:21:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:23.215 17:21:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:23.215 17:21:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:23.215 17:21:53 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:18:23.215 17:21:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:23.215 17:21:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.215 17:21:53 -- common/autotest_common.sh@1194 -- # return 0 00:18:23.215 17:21:53 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:23.215 [global] 00:18:23.215 thread=1 00:18:23.215 invalidate=1 00:18:23.215 rw=read 00:18:23.215 time_based=1 00:18:23.215 runtime=10 00:18:23.215 ioengine=libaio 00:18:23.215 direct=1 00:18:23.215 bs=262144 00:18:23.215 iodepth=64 00:18:23.215 norandommap=1 00:18:23.215 numjobs=1 00:18:23.215 00:18:23.215 [job0] 00:18:23.215 filename=/dev/nvme0n1 00:18:23.215 [job1] 00:18:23.215 filename=/dev/nvme10n1 00:18:23.215 [job2] 00:18:23.215 filename=/dev/nvme1n1 00:18:23.215 [job3] 00:18:23.215 filename=/dev/nvme2n1 00:18:23.215 [job4] 00:18:23.215 filename=/dev/nvme3n1 00:18:23.215 [job5] 00:18:23.215 filename=/dev/nvme4n1 00:18:23.215 [job6] 00:18:23.215 filename=/dev/nvme5n1 00:18:23.215 [job7] 00:18:23.215 filename=/dev/nvme6n1 00:18:23.215 [job8] 00:18:23.215 filename=/dev/nvme7n1 00:18:23.215 [job9] 00:18:23.215 filename=/dev/nvme8n1 00:18:23.215 [job10] 00:18:23.215 filename=/dev/nvme9n1 00:18:23.474 Could not set queue depth (nvme0n1) 00:18:23.474 Could not set queue depth (nvme10n1) 00:18:23.474 Could not set queue depth (nvme1n1) 00:18:23.474 Could not set queue depth (nvme2n1) 00:18:23.474 Could not set queue depth (nvme3n1) 00:18:23.474 Could not set queue depth (nvme4n1) 00:18:23.474 Could not set queue depth (nvme5n1) 00:18:23.474 Could not set queue depth (nvme6n1) 00:18:23.474 Could not set queue depth (nvme7n1) 00:18:23.474 Could not set queue depth (nvme8n1) 00:18:23.474 Could not set queue depth (nvme9n1) 00:18:23.474 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:23.474 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:23.474 fio-3.35 00:18:23.474 Starting 11 threads 00:18:35.678 00:18:35.678 job0: (groupid=0, jobs=1): err= 0: pid=84508: Thu Apr 25 17:22:03 2024 00:18:35.678 read: IOPS=585, BW=146MiB/s (154MB/s)(1479MiB/10099msec) 00:18:35.678 slat (usec): min=21, max=59665, avg=1690.55, stdev=6014.11 00:18:35.678 clat (msec): min=11, max=187, avg=107.46, stdev=19.09 00:18:35.678 lat (msec): min=11, max=220, avg=109.15, stdev=19.99 00:18:35.678 clat percentiles (msec): 00:18:35.678 | 1.00th=[ 35], 5.00th=[ 67], 10.00th=[ 93], 20.00th=[ 103], 00:18:35.678 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:18:35.678 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 124], 95.00th=[ 128], 00:18:35.678 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:18:35.678 | 99.99th=[ 188] 00:18:35.678 bw ( KiB/s): min=132096, max=235520, per=8.51%, avg=149770.45, stdev=21238.40, samples=20 00:18:35.678 iops : min= 516, max= 920, avg=585.00, stdev=82.96, samples=20 00:18:35.678 lat (msec) : 20=0.36%, 50=2.47%, 100=14.30%, 250=82.87% 00:18:35.678 cpu : usr=0.25%, sys=2.21%, ctx=974, majf=0, minf=4097 00:18:35.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:35.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.678 issued rwts: total=5915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.678 job1: (groupid=0, jobs=1): err= 0: pid=84509: Thu Apr 25 17:22:03 2024 00:18:35.678 read: IOPS=442, BW=111MiB/s (116MB/s)(1120MiB/10114msec) 00:18:35.678 slat (usec): min=21, max=154916, avg=2228.61, stdev=7822.15 00:18:35.678 clat (msec): min=12, max=264, avg=142.09, stdev=20.08 00:18:35.678 lat (msec): min=13, max=316, avg=144.32, stdev=21.46 00:18:35.678 clat percentiles (msec): 00:18:35.678 | 1.00th=[ 53], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 132], 00:18:35.678 | 30.00th=[ 138], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 146], 00:18:35.678 | 70.00th=[ 148], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 165], 00:18:35.678 | 99.00th=[ 201], 99.50th=[ 220], 99.90th=[ 266], 99.95th=[ 266], 00:18:35.678 | 99.99th=[ 266] 00:18:35.678 bw ( KiB/s): min=100864, max=128000, per=6.42%, avg=113063.25, stdev=8594.26, samples=20 00:18:35.678 iops : min= 394, max= 500, avg=441.65, stdev=33.57, samples=20 00:18:35.678 lat (msec) : 20=0.29%, 50=0.56%, 100=0.85%, 250=98.15%, 500=0.16% 00:18:35.678 cpu : usr=0.19%, sys=1.59%, ctx=952, majf=0, minf=4097 00:18:35.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:35.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.678 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.678 job2: (groupid=0, jobs=1): err= 0: pid=84510: Thu Apr 25 17:22:03 2024 00:18:35.678 read: IOPS=552, BW=138MiB/s (145MB/s)(1395MiB/10097msec) 00:18:35.678 slat (usec): min=13, max=105621, avg=1759.39, stdev=6582.59 00:18:35.678 clat (msec): min=19, max=205, avg=113.92, stdev=15.38 
00:18:35.678 lat (msec): min=20, max=259, avg=115.68, stdev=16.59 00:18:35.678 clat percentiles (msec): 00:18:35.678 | 1.00th=[ 73], 5.00th=[ 93], 10.00th=[ 100], 20.00th=[ 106], 00:18:35.678 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 117], 00:18:35.678 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 128], 95.00th=[ 132], 00:18:35.678 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 207], 99.95th=[ 207], 00:18:35.678 | 99.99th=[ 207] 00:18:35.678 bw ( KiB/s): min=128512, max=154624, per=8.02%, avg=141170.50, stdev=6931.59, samples=20 00:18:35.678 iops : min= 502, max= 604, avg=551.40, stdev=27.12, samples=20 00:18:35.678 lat (msec) : 20=0.02%, 50=0.36%, 100=11.01%, 250=88.62% 00:18:35.678 cpu : usr=0.23%, sys=1.87%, ctx=1377, majf=0, minf=4097 00:18:35.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:35.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.678 issued rwts: total=5578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.678 job3: (groupid=0, jobs=1): err= 0: pid=84511: Thu Apr 25 17:22:03 2024 00:18:35.678 read: IOPS=478, BW=120MiB/s (125MB/s)(1209MiB/10102msec) 00:18:35.678 slat (usec): min=20, max=139454, avg=2049.27, stdev=8635.10 00:18:35.678 clat (usec): min=1232, max=295240, avg=131460.43, stdev=30278.82 00:18:35.678 lat (usec): min=1266, max=295763, avg=133509.70, stdev=31694.02 00:18:35.678 clat percentiles (msec): 00:18:35.678 | 1.00th=[ 3], 5.00th=[ 111], 10.00th=[ 120], 20.00th=[ 125], 00:18:35.678 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 136], 60.00th=[ 140], 00:18:35.678 | 70.00th=[ 142], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 157], 00:18:35.678 | 99.00th=[ 186], 99.50th=[ 205], 99.90th=[ 257], 99.95th=[ 292], 00:18:35.678 | 99.99th=[ 296] 00:18:35.678 bw ( KiB/s): min=92672, max=184689, per=6.94%, avg=122216.25, stdev=19307.13, samples=20 00:18:35.678 iops : min= 362, max= 721, avg=477.15, stdev=75.43, samples=20 00:18:35.678 lat (msec) : 2=0.10%, 4=1.59%, 10=1.16%, 20=1.51%, 100=0.02% 00:18:35.678 lat (msec) : 250=95.29%, 500=0.33% 00:18:35.678 cpu : usr=0.20%, sys=1.63%, ctx=1254, majf=0, minf=4097 00:18:35.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:35.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.678 issued rwts: total=4836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.678 job4: (groupid=0, jobs=1): err= 0: pid=84512: Thu Apr 25 17:22:03 2024 00:18:35.678 read: IOPS=555, BW=139MiB/s (146MB/s)(1402MiB/10095msec) 00:18:35.678 slat (usec): min=20, max=63833, avg=1779.93, stdev=6136.19 00:18:35.678 clat (msec): min=26, max=197, avg=113.25, stdev=15.61 00:18:35.678 lat (msec): min=26, max=197, avg=115.03, stdev=16.64 00:18:35.678 clat percentiles (msec): 00:18:35.678 | 1.00th=[ 52], 5.00th=[ 92], 10.00th=[ 100], 20.00th=[ 106], 00:18:35.678 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 117], 00:18:35.678 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 134], 00:18:35.678 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 186], 00:18:35.678 | 99.99th=[ 199] 00:18:35.678 bw ( KiB/s): min=128000, max=181908, per=8.06%, avg=141882.55, stdev=10901.30, samples=20 00:18:35.678 
iops : min= 500, max= 710, avg=554.05, stdev=42.51, samples=20 00:18:35.678 lat (msec) : 50=0.43%, 100=9.88%, 250=89.69% 00:18:35.678 cpu : usr=0.19%, sys=2.19%, ctx=980, majf=0, minf=4097 00:18:35.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:35.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.678 issued rwts: total=5608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.678 job5: (groupid=0, jobs=1): err= 0: pid=84513: Thu Apr 25 17:22:03 2024 00:18:35.678 read: IOPS=1101, BW=275MiB/s (289MB/s)(2764MiB/10039msec) 00:18:35.678 slat (usec): min=20, max=35289, avg=899.72, stdev=3382.66 00:18:35.678 clat (msec): min=18, max=101, avg=57.14, stdev= 8.63 00:18:35.678 lat (msec): min=18, max=101, avg=58.04, stdev= 8.98 00:18:35.678 clat percentiles (msec): 00:18:35.678 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:18:35.678 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 60], 00:18:35.678 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 71], 00:18:35.678 | 99.00th=[ 77], 99.50th=[ 80], 99.90th=[ 84], 99.95th=[ 102], 00:18:35.678 | 99.99th=[ 103] 00:18:35.678 bw ( KiB/s): min=263680, max=302592, per=15.99%, avg=281477.50, stdev=10300.73, samples=20 00:18:35.678 iops : min= 1030, max= 1182, avg=1099.45, stdev=40.20, samples=20 00:18:35.678 lat (msec) : 20=0.06%, 50=18.50%, 100=81.37%, 250=0.06% 00:18:35.678 cpu : usr=0.34%, sys=3.91%, ctx=2716, majf=0, minf=4097 00:18:35.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:35.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.678 issued rwts: total=11055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.678 job6: (groupid=0, jobs=1): err= 0: pid=84514: Thu Apr 25 17:22:03 2024 00:18:35.678 read: IOPS=539, BW=135MiB/s (141MB/s)(1362MiB/10098msec) 00:18:35.678 slat (usec): min=18, max=117953, avg=1812.46, stdev=6432.69 00:18:35.678 clat (msec): min=18, max=211, avg=116.66, stdev=17.24 00:18:35.678 lat (msec): min=19, max=244, avg=118.47, stdev=18.31 00:18:35.678 clat percentiles (msec): 00:18:35.678 | 1.00th=[ 46], 5.00th=[ 96], 10.00th=[ 103], 20.00th=[ 108], 00:18:35.679 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 120], 00:18:35.679 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 136], 00:18:35.679 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 211], 99.95th=[ 211], 00:18:35.679 | 99.99th=[ 211] 00:18:35.679 bw ( KiB/s): min=127488, max=144896, per=7.83%, avg=137829.35, stdev=5873.36, samples=20 00:18:35.679 iops : min= 498, max= 566, avg=538.35, stdev=22.91, samples=20 00:18:35.679 lat (msec) : 20=0.04%, 50=1.17%, 100=6.85%, 250=91.94% 00:18:35.679 cpu : usr=0.18%, sys=1.94%, ctx=1075, majf=0, minf=4097 00:18:35.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:35.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.679 issued rwts: total=5447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.679 job7: (groupid=0, jobs=1): err= 0: 
pid=84515: Thu Apr 25 17:22:03 2024 00:18:35.679 read: IOPS=456, BW=114MiB/s (120MB/s)(1154MiB/10104msec) 00:18:35.679 slat (usec): min=20, max=128558, avg=2162.11, stdev=8627.98 00:18:35.679 clat (msec): min=100, max=237, avg=137.77, stdev=13.70 00:18:35.679 lat (msec): min=105, max=278, avg=139.93, stdev=16.01 00:18:35.679 clat percentiles (msec): 00:18:35.679 | 1.00th=[ 110], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 129], 00:18:35.679 | 30.00th=[ 132], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 140], 00:18:35.679 | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 150], 95.00th=[ 157], 00:18:35.679 | 99.00th=[ 197], 99.50th=[ 220], 99.90th=[ 236], 99.95th=[ 236], 00:18:35.679 | 99.99th=[ 239] 00:18:35.679 bw ( KiB/s): min=95232, max=135680, per=6.62%, avg=116589.65, stdev=9702.53, samples=20 00:18:35.679 iops : min= 372, max= 530, avg=455.25, stdev=37.96, samples=20 00:18:35.679 lat (msec) : 250=100.00% 00:18:35.679 cpu : usr=0.14%, sys=1.59%, ctx=969, majf=0, minf=4097 00:18:35.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:35.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.679 issued rwts: total=4616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.679 job8: (groupid=0, jobs=1): err= 0: pid=84516: Thu Apr 25 17:22:03 2024 00:18:35.679 read: IOPS=1086, BW=272MiB/s (285MB/s)(2730MiB/10047msec) 00:18:35.679 slat (usec): min=20, max=38246, avg=912.25, stdev=3372.13 00:18:35.679 clat (usec): min=17001, max=89614, avg=57880.61, stdev=8965.58 00:18:35.679 lat (usec): min=17170, max=92964, avg=58792.86, stdev=9324.57 00:18:35.679 clat percentiles (usec): 00:18:35.679 | 1.00th=[36963], 5.00th=[43779], 10.00th=[46924], 20.00th=[50070], 00:18:35.679 | 30.00th=[53216], 40.00th=[55837], 50.00th=[58459], 60.00th=[60556], 00:18:35.679 | 70.00th=[63177], 80.00th=[65274], 90.00th=[68682], 95.00th=[71828], 00:18:35.679 | 99.00th=[77071], 99.50th=[79168], 99.90th=[84411], 99.95th=[88605], 00:18:35.679 | 99.99th=[89654] 00:18:35.679 bw ( KiB/s): min=242148, max=300966, per=15.79%, avg=277882.10, stdev=15616.33, samples=20 00:18:35.679 iops : min= 945, max= 1175, avg=1085.40, stdev=61.06, samples=20 00:18:35.679 lat (msec) : 20=0.12%, 50=19.19%, 100=80.69% 00:18:35.679 cpu : usr=0.42%, sys=3.61%, ctx=2506, majf=0, minf=4097 00:18:35.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:35.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.679 issued rwts: total=10919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.679 job9: (groupid=0, jobs=1): err= 0: pid=84517: Thu Apr 25 17:22:03 2024 00:18:35.679 read: IOPS=450, BW=113MiB/s (118MB/s)(1138MiB/10101msec) 00:18:35.679 slat (usec): min=20, max=124396, avg=2194.13, stdev=8725.78 00:18:35.679 clat (msec): min=92, max=242, avg=139.68, stdev=14.72 00:18:35.679 lat (msec): min=103, max=314, avg=141.87, stdev=16.94 00:18:35.679 clat percentiles (msec): 00:18:35.679 | 1.00th=[ 110], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 129], 00:18:35.679 | 30.00th=[ 133], 40.00th=[ 136], 50.00th=[ 140], 60.00th=[ 142], 00:18:35.679 | 70.00th=[ 146], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 163], 00:18:35.679 | 99.00th=[ 192], 99.50th=[ 207], 
99.90th=[ 243], 99.95th=[ 243], 00:18:35.679 | 99.99th=[ 243] 00:18:35.679 bw ( KiB/s): min=85162, max=138240, per=6.53%, avg=114896.55, stdev=11196.56, samples=20 00:18:35.679 iops : min= 332, max= 540, avg=448.50, stdev=43.78, samples=20 00:18:35.679 lat (msec) : 100=0.02%, 250=99.98% 00:18:35.679 cpu : usr=0.18%, sys=1.67%, ctx=939, majf=0, minf=4097 00:18:35.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:35.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.679 issued rwts: total=4550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.679 job10: (groupid=0, jobs=1): err= 0: pid=84518: Thu Apr 25 17:22:03 2024 00:18:35.679 read: IOPS=645, BW=161MiB/s (169MB/s)(1632MiB/10113msec) 00:18:35.679 slat (usec): min=12, max=118568, avg=1526.54, stdev=6129.21 00:18:35.679 clat (msec): min=9, max=256, avg=97.47, stdev=57.17 00:18:35.679 lat (msec): min=9, max=256, avg=99.00, stdev=58.25 00:18:35.679 clat percentiles (msec): 00:18:35.679 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 27], 20.00th=[ 31], 00:18:35.679 | 30.00th=[ 35], 40.00th=[ 47], 50.00th=[ 133], 60.00th=[ 138], 00:18:35.679 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 153], 95.00th=[ 157], 00:18:35.679 | 99.00th=[ 201], 99.50th=[ 228], 99.90th=[ 241], 99.95th=[ 257], 00:18:35.679 | 99.99th=[ 257] 00:18:35.679 bw ( KiB/s): min=105772, max=539648, per=9.40%, avg=165467.80, stdev=130949.92, samples=20 00:18:35.679 iops : min= 413, max= 2108, avg=646.35, stdev=511.53, samples=20 00:18:35.679 lat (msec) : 10=0.06%, 20=1.76%, 50=38.74%, 100=0.46%, 250=58.92% 00:18:35.679 lat (msec) : 500=0.06% 00:18:35.679 cpu : usr=0.23%, sys=2.15%, ctx=1422, majf=0, minf=4097 00:18:35.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:35.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:35.679 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:35.679 00:18:35.679 Run status group 0 (all jobs): 00:18:35.679 READ: bw=1719MiB/s (1802MB/s), 111MiB/s-275MiB/s (116MB/s-289MB/s), io=17.0GiB (18.2GB), run=10039-10114msec 00:18:35.679 00:18:35.679 Disk stats (read/write): 00:18:35.679 nvme0n1: ios=11705/0, merge=0/0, ticks=1241954/0, in_queue=1241954, util=97.95% 00:18:35.679 nvme10n1: ios=8857/0, merge=0/0, ticks=1242488/0, in_queue=1242488, util=98.00% 00:18:35.679 nvme1n1: ios=11033/0, merge=0/0, ticks=1241640/0, in_queue=1241640, util=98.02% 00:18:35.679 nvme2n1: ios=9545/0, merge=0/0, ticks=1236398/0, in_queue=1236398, util=98.04% 00:18:35.679 nvme3n1: ios=11089/0, merge=0/0, ticks=1242315/0, in_queue=1242315, util=98.16% 00:18:35.679 nvme4n1: ios=22021/0, merge=0/0, ticks=1236257/0, in_queue=1236257, util=98.44% 00:18:35.679 nvme5n1: ios=10812/0, merge=0/0, ticks=1244213/0, in_queue=1244213, util=98.63% 00:18:35.679 nvme6n1: ios=9129/0, merge=0/0, ticks=1242119/0, in_queue=1242119, util=98.65% 00:18:35.679 nvme7n1: ios=21711/0, merge=0/0, ticks=1234447/0, in_queue=1234447, util=98.76% 00:18:35.679 nvme8n1: ios=8972/0, merge=0/0, ticks=1239619/0, in_queue=1239619, util=98.86% 00:18:35.679 nvme9n1: ios=12938/0, merge=0/0, ticks=1236208/0, in_queue=1236208, util=99.10% 00:18:35.679 17:22:03 -- 
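That completes the read pass. Each of the 11 jobs above ran the same profile from the generated job file (rw=read, bs=262144, iodepth=64, libaio, direct=1, time_based for 10 s, norandommap, numjobs=1) against one of the connected namespaces. For a single device, a roughly equivalent standalone invocation would be (device name illustrative; the wrapper started next repeats the run with rw=randwrite):

    fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=262144 --iodepth=64 \
        --ioengine=libaio --direct=1 --thread --time_based --runtime=10 \
        --norandommap --invalidate=1 --numjobs=1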
target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:35.679 [global] 00:18:35.679 thread=1 00:18:35.679 invalidate=1 00:18:35.679 rw=randwrite 00:18:35.679 time_based=1 00:18:35.679 runtime=10 00:18:35.679 ioengine=libaio 00:18:35.679 direct=1 00:18:35.679 bs=262144 00:18:35.679 iodepth=64 00:18:35.679 norandommap=1 00:18:35.679 numjobs=1 00:18:35.679 00:18:35.679 [job0] 00:18:35.679 filename=/dev/nvme0n1 00:18:35.679 [job1] 00:18:35.679 filename=/dev/nvme10n1 00:18:35.679 [job2] 00:18:35.679 filename=/dev/nvme1n1 00:18:35.679 [job3] 00:18:35.679 filename=/dev/nvme2n1 00:18:35.679 [job4] 00:18:35.679 filename=/dev/nvme3n1 00:18:35.679 [job5] 00:18:35.679 filename=/dev/nvme4n1 00:18:35.679 [job6] 00:18:35.679 filename=/dev/nvme5n1 00:18:35.679 [job7] 00:18:35.679 filename=/dev/nvme6n1 00:18:35.679 [job8] 00:18:35.679 filename=/dev/nvme7n1 00:18:35.679 [job9] 00:18:35.679 filename=/dev/nvme8n1 00:18:35.679 [job10] 00:18:35.679 filename=/dev/nvme9n1 00:18:35.679 Could not set queue depth (nvme0n1) 00:18:35.679 Could not set queue depth (nvme10n1) 00:18:35.679 Could not set queue depth (nvme1n1) 00:18:35.679 Could not set queue depth (nvme2n1) 00:18:35.679 Could not set queue depth (nvme3n1) 00:18:35.679 Could not set queue depth (nvme4n1) 00:18:35.679 Could not set queue depth (nvme5n1) 00:18:35.679 Could not set queue depth (nvme6n1) 00:18:35.679 Could not set queue depth (nvme7n1) 00:18:35.679 Could not set queue depth (nvme8n1) 00:18:35.679 Could not set queue depth (nvme9n1) 00:18:35.679 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:35.679 fio-3.35 00:18:35.679 Starting 11 threads 00:18:45.660 00:18:45.661 job0: (groupid=0, jobs=1): err= 0: pid=84715: Thu Apr 25 17:22:14 2024 00:18:45.661 write: IOPS=363, BW=90.9MiB/s (95.3MB/s)(923MiB/10159msec); 0 zone resets 00:18:45.661 slat (usec): min=16, max=14553, avg=2687.23, stdev=4783.40 00:18:45.661 clat (msec): min=13, max=339, avg=173.35, stdev=39.48 00:18:45.661 lat (msec): min=13, max=339, avg=176.04, stdev=39.84 00:18:45.661 clat percentiles (msec): 00:18:45.661 | 1.00th=[ 48], 5.00th=[ 79], 10.00th=[ 83], 20.00th=[ 178], 
00:18:45.661 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 188], 60.00th=[ 190], 00:18:45.661 | 70.00th=[ 190], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 192], 00:18:45.661 | 99.00th=[ 234], 99.50th=[ 279], 99.90th=[ 330], 99.95th=[ 338], 00:18:45.661 | 99.99th=[ 338] 00:18:45.661 bw ( KiB/s): min=83968, max=196608, per=5.78%, avg=92876.80, stdev=24827.01, samples=20 00:18:45.661 iops : min= 328, max= 768, avg=362.80, stdev=96.98, samples=20 00:18:45.661 lat (msec) : 20=0.22%, 50=0.81%, 100=11.40%, 250=86.76%, 500=0.81% 00:18:45.661 cpu : usr=0.66%, sys=1.14%, ctx=4568, majf=0, minf=1 00:18:45.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:45.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.661 issued rwts: total=0,3692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.661 job1: (groupid=0, jobs=1): err= 0: pid=84720: Thu Apr 25 17:22:14 2024 00:18:45.661 write: IOPS=409, BW=102MiB/s (107MB/s)(1036MiB/10132msec); 0 zone resets 00:18:45.661 slat (usec): min=17, max=88208, avg=2407.15, stdev=4358.90 00:18:45.661 clat (msec): min=90, max=281, avg=154.02, stdev=13.85 00:18:45.661 lat (msec): min=90, max=281, avg=156.43, stdev=13.34 00:18:45.661 clat percentiles (msec): 00:18:45.661 | 1.00th=[ 142], 5.00th=[ 144], 10.00th=[ 144], 20.00th=[ 146], 00:18:45.661 | 30.00th=[ 153], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:18:45.661 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 167], 00:18:45.661 | 99.00th=[ 215], 99.50th=[ 234], 99.90th=[ 271], 99.95th=[ 271], 00:18:45.661 | 99.99th=[ 284] 00:18:45.661 bw ( KiB/s): min=71680, max=108544, per=6.50%, avg=104462.95, stdev=7860.08, samples=20 00:18:45.661 iops : min= 280, max= 424, avg=408.05, stdev=30.70, samples=20 00:18:45.661 lat (msec) : 100=0.19%, 250=99.47%, 500=0.34% 00:18:45.661 cpu : usr=0.86%, sys=1.39%, ctx=6594, majf=0, minf=1 00:18:45.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:45.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.661 issued rwts: total=0,4144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.661 job2: (groupid=0, jobs=1): err= 0: pid=84733: Thu Apr 25 17:22:14 2024 00:18:45.661 write: IOPS=1441, BW=360MiB/s (378MB/s)(3619MiB/10039msec); 0 zone resets 00:18:45.661 slat (usec): min=17, max=11958, avg=686.78, stdev=1160.27 00:18:45.661 clat (usec): min=14452, max=86543, avg=43687.85, stdev=6501.91 00:18:45.661 lat (usec): min=14499, max=86603, avg=44374.62, stdev=6553.15 00:18:45.661 clat percentiles (usec): 00:18:45.661 | 1.00th=[40109], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:45.661 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42730], 60.00th=[43254], 00:18:45.661 | 70.00th=[43779], 80.00th=[43779], 90.00th=[44303], 95.00th=[44827], 00:18:45.661 | 99.00th=[82314], 99.50th=[82314], 99.90th=[83362], 99.95th=[85459], 00:18:45.661 | 99.99th=[86508] 00:18:45.661 bw ( KiB/s): min=194949, max=383488, per=22.95%, avg=368966.65, stdev=41210.15, samples=20 00:18:45.661 iops : min= 761, max= 1498, avg=1441.25, stdev=161.09, samples=20 00:18:45.661 lat (msec) : 20=0.06%, 50=96.46%, 100=3.48% 00:18:45.661 cpu : usr=1.99%, sys=2.94%, ctx=18925, majf=0, minf=1 
00:18:45.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:45.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.661 issued rwts: total=0,14475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.661 job3: (groupid=0, jobs=1): err= 0: pid=84734: Thu Apr 25 17:22:14 2024 00:18:45.661 write: IOPS=411, BW=103MiB/s (108MB/s)(1043MiB/10140msec); 0 zone resets 00:18:45.661 slat (usec): min=21, max=42804, avg=2391.57, stdev=4210.48 00:18:45.661 clat (msec): min=28, max=284, avg=153.05, stdev=16.17 00:18:45.661 lat (msec): min=28, max=284, avg=155.44, stdev=15.84 00:18:45.661 clat percentiles (msec): 00:18:45.661 | 1.00th=[ 96], 5.00th=[ 144], 10.00th=[ 144], 20.00th=[ 146], 00:18:45.661 | 30.00th=[ 153], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:18:45.661 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 176], 00:18:45.661 | 99.00th=[ 209], 99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 275], 00:18:45.661 | 99.99th=[ 284] 00:18:45.661 bw ( KiB/s): min=85844, max=108544, per=6.54%, avg=105160.30, stdev=4827.10, samples=20 00:18:45.661 iops : min= 335, max= 424, avg=410.75, stdev=18.92, samples=20 00:18:45.661 lat (msec) : 50=0.29%, 100=0.84%, 250=98.54%, 500=0.34% 00:18:45.661 cpu : usr=0.87%, sys=1.32%, ctx=2247, majf=0, minf=1 00:18:45.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:45.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.661 issued rwts: total=0,4173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.661 job4: (groupid=0, jobs=1): err= 0: pid=84735: Thu Apr 25 17:22:14 2024 00:18:45.661 write: IOPS=342, BW=85.6MiB/s (89.7MB/s)(870MiB/10161msec); 0 zone resets 00:18:45.661 slat (usec): min=16, max=105027, avg=2819.69, stdev=5261.18 00:18:45.661 clat (msec): min=4, max=345, avg=184.07, stdev=26.59 00:18:45.661 lat (msec): min=4, max=345, avg=186.89, stdev=26.59 00:18:45.661 clat percentiles (msec): 00:18:45.661 | 1.00th=[ 52], 5.00th=[ 171], 10.00th=[ 178], 20.00th=[ 180], 00:18:45.661 | 30.00th=[ 188], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 190], 00:18:45.661 | 70.00th=[ 190], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 192], 00:18:45.661 | 99.00th=[ 241], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 347], 00:18:45.661 | 99.99th=[ 347] 00:18:45.661 bw ( KiB/s): min=86016, max=109056, per=5.44%, avg=87415.20, stdev=5129.43, samples=20 00:18:45.661 iops : min= 336, max= 426, avg=341.45, stdev=20.04, samples=20 00:18:45.661 lat (msec) : 10=0.29%, 20=0.12%, 50=0.58%, 100=1.58%, 250=96.46% 00:18:45.661 lat (msec) : 500=0.98% 00:18:45.661 cpu : usr=0.57%, sys=0.74%, ctx=4980, majf=0, minf=1 00:18:45.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:45.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.661 issued rwts: total=0,3478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.661 job5: (groupid=0, jobs=1): err= 0: pid=84736: Thu Apr 25 17:22:14 2024 00:18:45.661 write: IOPS=340, BW=85.2MiB/s (89.3MB/s)(865MiB/10153msec); 0 zone 
resets 00:18:45.661 slat (usec): min=19, max=34524, avg=2887.02, stdev=5050.74 00:18:45.661 clat (msec): min=15, max=344, avg=184.89, stdev=27.24 00:18:45.661 lat (msec): min=15, max=344, avg=187.78, stdev=27.22 00:18:45.661 clat percentiles (msec): 00:18:45.661 | 1.00th=[ 77], 5.00th=[ 118], 10.00th=[ 178], 20.00th=[ 180], 00:18:45.661 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 192], 00:18:45.661 | 70.00th=[ 194], 80.00th=[ 197], 90.00th=[ 201], 95.00th=[ 205], 00:18:45.661 | 99.00th=[ 230], 99.50th=[ 279], 99.90th=[ 334], 99.95th=[ 334], 00:18:45.661 | 99.99th=[ 347] 00:18:45.661 bw ( KiB/s): min=79872, max=133365, per=5.40%, avg=86881.45, stdev=11089.55, samples=20 00:18:45.661 iops : min= 312, max= 520, avg=339.30, stdev=43.11, samples=20 00:18:45.661 lat (msec) : 20=0.03%, 50=0.58%, 100=0.81%, 250=97.75%, 500=0.84% 00:18:45.661 cpu : usr=0.63%, sys=1.05%, ctx=4195, majf=0, minf=1 00:18:45.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:45.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.661 issued rwts: total=0,3459,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.661 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.661 job6: (groupid=0, jobs=1): err= 0: pid=84737: Thu Apr 25 17:22:14 2024 00:18:45.661 write: IOPS=1499, BW=375MiB/s (393MB/s)(3764MiB/10038msec); 0 zone resets 00:18:45.661 slat (usec): min=17, max=5230, avg=659.81, stdev=1088.12 00:18:45.662 clat (usec): min=4435, max=78448, avg=41990.46, stdev=2286.06 00:18:45.662 lat (usec): min=4465, max=78508, avg=42650.28, stdev=2115.11 00:18:45.662 clat percentiles (usec): 00:18:45.662 | 1.00th=[39060], 5.00th=[39584], 10.00th=[40109], 20.00th=[40633], 00:18:45.662 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:18:45.662 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43779], 00:18:45.662 | 99.00th=[44303], 99.50th=[44827], 99.90th=[67634], 99.95th=[72877], 00:18:45.662 | 99.99th=[78119] 00:18:45.662 bw ( KiB/s): min=374272, max=391168, per=23.86%, avg=383743.70, stdev=3429.43, samples=20 00:18:45.662 iops : min= 1462, max= 1528, avg=1498.90, stdev=13.35, samples=20 00:18:45.662 lat (msec) : 10=0.05%, 20=0.11%, 50=99.56%, 100=0.28% 00:18:45.662 cpu : usr=2.27%, sys=3.12%, ctx=17542, majf=0, minf=1 00:18:45.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:45.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.662 issued rwts: total=0,15056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.662 job7: (groupid=0, jobs=1): err= 0: pid=84738: Thu Apr 25 17:22:14 2024 00:18:45.662 write: IOPS=347, BW=86.9MiB/s (91.2MB/s)(883MiB/10157msec); 0 zone resets 00:18:45.662 slat (usec): min=17, max=16078, avg=2827.47, stdev=4918.68 00:18:45.662 clat (msec): min=16, max=344, avg=181.11, stdev=26.79 00:18:45.662 lat (msec): min=16, max=344, avg=183.94, stdev=26.75 00:18:45.662 clat percentiles (msec): 00:18:45.662 | 1.00th=[ 70], 5.00th=[ 118], 10.00th=[ 176], 20.00th=[ 178], 00:18:45.662 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 190], 00:18:45.662 | 70.00th=[ 190], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 192], 00:18:45.662 | 99.00th=[ 239], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 347], 
00:18:45.662 | 99.99th=[ 347] 00:18:45.662 bw ( KiB/s): min=83968, max=135438, per=5.52%, avg=88819.90, stdev=11018.00, samples=20 00:18:45.662 iops : min= 328, max= 529, avg=346.95, stdev=43.03, samples=20 00:18:45.662 lat (msec) : 20=0.11%, 50=0.57%, 100=0.91%, 250=97.57%, 500=0.85% 00:18:45.662 cpu : usr=0.54%, sys=0.79%, ctx=4773, majf=0, minf=1 00:18:45.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:45.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.662 issued rwts: total=0,3532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.662 job8: (groupid=0, jobs=1): err= 0: pid=84739: Thu Apr 25 17:22:14 2024 00:18:45.662 write: IOPS=345, BW=86.3MiB/s (90.5MB/s)(877MiB/10163msec); 0 zone resets 00:18:45.662 slat (usec): min=20, max=18058, avg=2846.60, stdev=4947.12 00:18:45.662 clat (msec): min=17, max=341, avg=182.47, stdev=27.00 00:18:45.662 lat (msec): min=17, max=341, avg=185.32, stdev=26.97 00:18:45.662 clat percentiles (msec): 00:18:45.662 | 1.00th=[ 77], 5.00th=[ 118], 10.00th=[ 176], 20.00th=[ 180], 00:18:45.662 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 190], 00:18:45.662 | 70.00th=[ 192], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 197], 00:18:45.662 | 99.00th=[ 236], 99.50th=[ 296], 99.90th=[ 330], 99.95th=[ 342], 00:18:45.662 | 99.99th=[ 342] 00:18:45.662 bw ( KiB/s): min=83968, max=134656, per=5.48%, avg=88166.40, stdev=11009.54, samples=20 00:18:45.662 iops : min= 328, max= 526, avg=344.40, stdev=43.01, samples=20 00:18:45.662 lat (msec) : 20=0.09%, 50=0.57%, 100=0.80%, 250=97.69%, 500=0.86% 00:18:45.662 cpu : usr=0.65%, sys=1.31%, ctx=4002, majf=0, minf=1 00:18:45.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:45.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.662 issued rwts: total=0,3508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.662 job9: (groupid=0, jobs=1): err= 0: pid=84740: Thu Apr 25 17:22:14 2024 00:18:45.662 write: IOPS=411, BW=103MiB/s (108MB/s)(1042MiB/10137msec); 0 zone resets 00:18:45.662 slat (usec): min=17, max=52733, avg=2396.19, stdev=4251.68 00:18:45.662 clat (msec): min=54, max=285, avg=153.21, stdev=14.44 00:18:45.662 lat (msec): min=54, max=285, avg=155.60, stdev=13.99 00:18:45.662 clat percentiles (msec): 00:18:45.662 | 1.00th=[ 125], 5.00th=[ 144], 10.00th=[ 144], 20.00th=[ 146], 00:18:45.662 | 30.00th=[ 153], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:18:45.662 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 163], 00:18:45.662 | 99.00th=[ 209], 99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 275], 00:18:45.662 | 99.99th=[ 284] 00:18:45.662 bw ( KiB/s): min=80032, max=108544, per=6.53%, avg=105074.50, stdev=6020.82, samples=20 00:18:45.662 iops : min= 312, max= 424, avg=410.40, stdev=23.65, samples=20 00:18:45.662 lat (msec) : 100=0.67%, 250=98.99%, 500=0.34% 00:18:45.662 cpu : usr=0.71%, sys=0.81%, ctx=4611, majf=0, minf=1 00:18:45.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:45.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:18:45.662 issued rwts: total=0,4168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.662 job10: (groupid=0, jobs=1): err= 0: pid=84741: Thu Apr 25 17:22:14 2024 00:18:45.662 write: IOPS=409, BW=102MiB/s (107MB/s)(1038MiB/10137msec); 0 zone resets 00:18:45.662 slat (usec): min=20, max=88200, avg=2402.25, stdev=4338.88 00:18:45.662 clat (msec): min=17, max=288, avg=153.75, stdev=16.16 00:18:45.662 lat (msec): min=17, max=288, avg=156.16, stdev=15.76 00:18:45.662 clat percentiles (msec): 00:18:45.662 | 1.00th=[ 142], 5.00th=[ 144], 10.00th=[ 144], 20.00th=[ 146], 00:18:45.662 | 30.00th=[ 153], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 155], 00:18:45.662 | 70.00th=[ 155], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 176], 00:18:45.662 | 99.00th=[ 213], 99.50th=[ 241], 99.90th=[ 279], 99.95th=[ 279], 00:18:45.662 | 99.99th=[ 288] 00:18:45.662 bw ( KiB/s): min=81920, max=108544, per=6.51%, avg=104682.65, stdev=5866.74, samples=20 00:18:45.662 iops : min= 320, max= 424, avg=408.90, stdev=22.91, samples=20 00:18:45.662 lat (msec) : 20=0.10%, 50=0.39%, 250=99.08%, 500=0.43% 00:18:45.662 cpu : usr=0.88%, sys=1.19%, ctx=5134, majf=0, minf=1 00:18:45.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:45.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:45.662 issued rwts: total=0,4153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.662 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:45.662 00:18:45.662 Run status group 0 (all jobs): 00:18:45.662 WRITE: bw=1570MiB/s (1647MB/s), 85.2MiB/s-375MiB/s (89.3MB/s-393MB/s), io=15.6GiB (16.7GB), run=10038-10163msec 00:18:45.662 00:18:45.662 Disk stats (read/write): 00:18:45.662 nvme0n1: ios=49/7250, merge=0/0, ticks=86/1211348, in_queue=1211434, util=98.06% 00:18:45.662 nvme10n1: ios=49/8145, merge=0/0, ticks=61/1211555, in_queue=1211616, util=97.91% 00:18:45.662 nvme1n1: ios=36/28794, merge=0/0, ticks=30/1218581, in_queue=1218611, util=98.12% 00:18:45.662 nvme2n1: ios=13/8205, merge=0/0, ticks=13/1211961, in_queue=1211974, util=98.03% 00:18:45.662 nvme3n1: ios=0/6829, merge=0/0, ticks=0/1212178, in_queue=1212178, util=98.20% 00:18:45.662 nvme4n1: ios=5/6780, merge=0/0, ticks=11/1209399, in_queue=1209410, util=98.30% 00:18:45.662 nvme5n1: ios=0/29933, merge=0/0, ticks=0/1217870, in_queue=1217870, util=98.40% 00:18:45.662 nvme6n1: ios=0/6935, merge=0/0, ticks=0/1211551, in_queue=1211551, util=98.50% 00:18:45.662 nvme7n1: ios=0/6883, merge=0/0, ticks=0/1211173, in_queue=1211173, util=98.81% 00:18:45.662 nvme8n1: ios=0/8197, merge=0/0, ticks=0/1212178, in_queue=1212178, util=98.84% 00:18:45.662 nvme9n1: ios=0/8171, merge=0/0, ticks=0/1212168, in_queue=1212168, util=98.94% 00:18:45.662 17:22:14 -- target/multiconnection.sh@36 -- # sync 00:18:45.662 17:22:14 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:45.662 17:22:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.662 17:22:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.662 17:22:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:45.662 17:22:14 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.662 17:22:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.662 17:22:14 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:45.662 17:22:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:18:45.662 17:22:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.662 17:22:14 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.662 17:22:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.662 17:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.662 17:22:14 -- common/autotest_common.sh@10 -- # set +x 00:18:45.662 17:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.662 17:22:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.662 17:22:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:45.663 17:22:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:45.663 17:22:14 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:45.663 17:22:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:18:45.663 17:22:14 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:45.663 17:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:14 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:45.663 17:22:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:45.663 17:22:14 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:45.663 17:22:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:18:45.663 17:22:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:45.663 17:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:14 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:45.663 17:22:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:45.663 17:22:14 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:45.663 17:22:14 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:18:45.663 17:22:14 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:14 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:45.663 17:22:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:14 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:45.663 17:22:14 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:45.663 17:22:14 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:45.663 17:22:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:18:45.663 17:22:15 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:45.663 17:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:45.663 17:22:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:45.663 17:22:15 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:18:45.663 17:22:15 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:45.663 17:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:45.663 17:22:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:45.663 17:22:15 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:18:45.663 17:22:15 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:45.663 17:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:45.663 17:22:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:45.663 17:22:15 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:18:45.663 17:22:15 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:45.663 17:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:45.663 17:22:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:45.663 17:22:15 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:45.663 17:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.663 17:22:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.663 17:22:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:45.663 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:45.663 17:22:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:45.663 17:22:15 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.663 17:22:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:18:45.663 17:22:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.663 17:22:15 -- 
common/autotest_common.sh@1217 -- # return 0 00:18:45.663 17:22:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:45.663 17:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.663 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.663 17:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.664 17:22:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.664 17:22:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:45.664 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:45.664 17:22:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:45.664 17:22:15 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.664 17:22:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.664 17:22:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:45.664 17:22:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.664 17:22:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:18:45.664 17:22:15 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.664 17:22:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:45.664 17:22:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.664 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.664 17:22:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.664 17:22:15 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:45.664 17:22:15 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:45.664 17:22:15 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:45.664 17:22:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:45.664 17:22:15 -- nvmf/common.sh@117 -- # sync 00:18:45.664 17:22:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.664 17:22:15 -- nvmf/common.sh@120 -- # set +e 00:18:45.664 17:22:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.664 17:22:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.664 rmmod nvme_tcp 00:18:45.664 rmmod nvme_fabrics 00:18:45.664 rmmod nvme_keyring 00:18:45.664 17:22:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.664 17:22:15 -- nvmf/common.sh@124 -- # set -e 00:18:45.664 17:22:15 -- nvmf/common.sh@125 -- # return 0 00:18:45.664 17:22:15 -- nvmf/common.sh@478 -- # '[' -n 84031 ']' 00:18:45.664 17:22:15 -- nvmf/common.sh@479 -- # killprocess 84031 00:18:45.664 17:22:15 -- common/autotest_common.sh@936 -- # '[' -z 84031 ']' 00:18:45.664 17:22:15 -- common/autotest_common.sh@940 -- # kill -0 84031 00:18:45.664 17:22:15 -- common/autotest_common.sh@941 -- # uname 00:18:45.664 17:22:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:45.664 17:22:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84031 00:18:45.664 killing process with pid 84031 00:18:45.664 17:22:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:45.664 17:22:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:45.664 17:22:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84031' 00:18:45.664 17:22:15 -- common/autotest_common.sh@955 -- # kill 84031 00:18:45.664 17:22:15 -- common/autotest_common.sh@960 -- # wait 84031 00:18:45.923 17:22:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:45.923 17:22:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p 
]] 00:18:45.923 17:22:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:45.923 17:22:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.923 17:22:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.923 17:22:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.923 17:22:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.923 17:22:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.923 17:22:15 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:45.923 ************************************ 00:18:45.923 END TEST nvmf_multiconnection 00:18:45.923 ************************************ 00:18:45.923 00:18:45.923 real 0m49.055s 00:18:45.923 user 2m43.247s 00:18:45.923 sys 0m26.328s 00:18:45.923 17:22:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:45.923 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:46.181 17:22:15 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:46.181 17:22:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:46.181 17:22:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.181 17:22:15 -- common/autotest_common.sh@10 -- # set +x 00:18:46.181 ************************************ 00:18:46.181 START TEST nvmf_initiator_timeout 00:18:46.181 ************************************ 00:18:46.181 17:22:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:46.181 * Looking for test storage... 00:18:46.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:46.181 17:22:16 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:46.181 17:22:16 -- nvmf/common.sh@7 -- # uname -s 00:18:46.181 17:22:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.181 17:22:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.181 17:22:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.181 17:22:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.181 17:22:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.181 17:22:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.181 17:22:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.181 17:22:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.181 17:22:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.181 17:22:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.181 17:22:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:18:46.181 17:22:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:18:46.181 17:22:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.181 17:22:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.181 17:22:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:46.181 17:22:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.181 17:22:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:46.181 17:22:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.181 17:22:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.181 17:22:16 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.182 17:22:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.182 17:22:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.182 17:22:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.182 17:22:16 -- paths/export.sh@5 -- # export PATH 00:18:46.182 17:22:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.182 17:22:16 -- nvmf/common.sh@47 -- # : 0 00:18:46.182 17:22:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.182 17:22:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.182 17:22:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.182 17:22:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.182 17:22:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.182 17:22:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.182 17:22:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.182 17:22:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.182 17:22:16 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.182 17:22:16 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.182 17:22:16 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:46.182 17:22:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:46.182 17:22:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.182 17:22:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:46.182 17:22:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:46.182 17:22:16 
-- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:46.182 17:22:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.182 17:22:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.182 17:22:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.182 17:22:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:46.182 17:22:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:46.182 17:22:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:46.182 17:22:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:46.182 17:22:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:46.182 17:22:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:46.182 17:22:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.182 17:22:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.182 17:22:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:46.182 17:22:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:46.182 17:22:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:46.182 17:22:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:46.182 17:22:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:46.182 17:22:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.182 17:22:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:46.182 17:22:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:46.182 17:22:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:46.182 17:22:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:46.182 17:22:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:46.182 17:22:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:46.182 Cannot find device "nvmf_tgt_br" 00:18:46.182 17:22:16 -- nvmf/common.sh@155 -- # true 00:18:46.182 17:22:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.182 Cannot find device "nvmf_tgt_br2" 00:18:46.182 17:22:16 -- nvmf/common.sh@156 -- # true 00:18:46.182 17:22:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:46.182 17:22:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:46.440 Cannot find device "nvmf_tgt_br" 00:18:46.441 17:22:16 -- nvmf/common.sh@158 -- # true 00:18:46.441 17:22:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:46.441 Cannot find device "nvmf_tgt_br2" 00:18:46.441 17:22:16 -- nvmf/common.sh@159 -- # true 00:18:46.441 17:22:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:46.441 17:22:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:46.441 17:22:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.441 17:22:16 -- nvmf/common.sh@162 -- # true 00:18:46.441 17:22:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.441 17:22:16 -- nvmf/common.sh@163 -- # true 00:18:46.441 17:22:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:46.441 17:22:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:46.441 17:22:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:18:46.441 17:22:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:46.441 17:22:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:46.441 17:22:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:46.441 17:22:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:46.441 17:22:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:46.441 17:22:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:46.441 17:22:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:46.441 17:22:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:46.441 17:22:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:46.441 17:22:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:46.441 17:22:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.441 17:22:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:46.441 17:22:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:46.441 17:22:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:46.441 17:22:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:46.441 17:22:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:46.441 17:22:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:46.441 17:22:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:46.441 17:22:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:46.441 17:22:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:46.441 17:22:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:46.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:46.441 00:18:46.441 --- 10.0.0.2 ping statistics --- 00:18:46.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.441 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:46.441 17:22:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:46.699 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:46.699 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:18:46.699 00:18:46.699 --- 10.0.0.3 ping statistics --- 00:18:46.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.699 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:46.699 17:22:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:46.699 00:18:46.699 --- 10.0.0.1 ping statistics --- 00:18:46.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.699 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:46.699 17:22:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.699 17:22:16 -- nvmf/common.sh@422 -- # return 0 00:18:46.699 17:22:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:46.699 17:22:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.699 17:22:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:46.699 17:22:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:46.699 17:22:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.699 17:22:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:46.699 17:22:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:46.699 17:22:16 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:46.699 17:22:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:46.699 17:22:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:46.699 17:22:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.699 17:22:16 -- nvmf/common.sh@470 -- # nvmfpid=85106 00:18:46.699 17:22:16 -- nvmf/common.sh@471 -- # waitforlisten 85106 00:18:46.700 17:22:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.700 17:22:16 -- common/autotest_common.sh@817 -- # '[' -z 85106 ']' 00:18:46.700 17:22:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.700 17:22:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:46.700 17:22:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.700 17:22:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:46.700 17:22:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.700 [2024-04-25 17:22:16.502954] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:18:46.700 [2024-04-25 17:22:16.503227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.700 [2024-04-25 17:22:16.630008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.959 [2024-04-25 17:22:16.684106] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.959 [2024-04-25 17:22:16.684385] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.959 [2024-04-25 17:22:16.684574] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.959 [2024-04-25 17:22:16.684781] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.959 [2024-04-25 17:22:16.684901] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.959 [2024-04-25 17:22:16.685054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.959 [2024-04-25 17:22:16.685186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.959 [2024-04-25 17:22:16.685317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.959 [2024-04-25 17:22:16.685320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.526 17:22:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.527 17:22:17 -- common/autotest_common.sh@850 -- # return 0 00:18:47.527 17:22:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:47.527 17:22:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:47.527 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.527 17:22:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.527 17:22:17 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:47.527 17:22:17 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.527 17:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.527 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.785 Malloc0 00:18:47.785 17:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.785 17:22:17 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:47.785 17:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.785 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.785 Delay0 00:18:47.786 17:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.786 17:22:17 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.786 17:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.786 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.786 [2024-04-25 17:22:17.527433] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.786 17:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.786 17:22:17 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:47.786 17:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.786 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.786 17:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.786 17:22:17 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:47.786 17:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.786 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.786 17:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.786 17:22:17 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.786 17:22:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.786 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.786 [2024-04-25 17:22:17.563612] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.786 17:22:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.786 17:22:17 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.786 17:22:17 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.786 17:22:17 -- common/autotest_common.sh@1184 -- # local i=0 00:18:47.786 17:22:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.786 17:22:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:47.786 17:22:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:50.320 17:22:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:50.320 17:22:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:50.320 17:22:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:50.320 17:22:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:50.320 17:22:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.320 17:22:19 -- common/autotest_common.sh@1194 -- # return 0 00:18:50.320 17:22:19 -- target/initiator_timeout.sh@35 -- # fio_pid=85188 00:18:50.320 17:22:19 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:50.320 17:22:19 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:50.320 [global] 00:18:50.320 thread=1 00:18:50.320 invalidate=1 00:18:50.320 rw=write 00:18:50.320 time_based=1 00:18:50.320 runtime=60 00:18:50.320 ioengine=libaio 00:18:50.320 direct=1 00:18:50.320 bs=4096 00:18:50.320 iodepth=1 00:18:50.320 norandommap=0 00:18:50.320 numjobs=1 00:18:50.320 00:18:50.320 verify_dump=1 00:18:50.320 verify_backlog=512 00:18:50.320 verify_state_save=0 00:18:50.320 do_verify=1 00:18:50.320 verify=crc32c-intel 00:18:50.320 [job0] 00:18:50.320 filename=/dev/nvme0n1 00:18:50.320 Could not set queue depth (nvme0n1) 00:18:50.320 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:50.320 fio-3.35 00:18:50.320 Starting 1 thread 00:18:52.869 17:22:22 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:52.869 17:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.869 17:22:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.869 true 00:18:52.869 17:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.869 17:22:22 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:52.869 17:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.869 17:22:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.869 true 00:18:52.869 17:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.869 17:22:22 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:52.869 17:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.869 17:22:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.869 true 00:18:52.869 17:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.869 17:22:22 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:52.869 17:22:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.869 17:22:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.869 true 00:18:52.869 17:22:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.869 17:22:22 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:56.218 17:22:25 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:56.218 17:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.218 17:22:25 -- common/autotest_common.sh@10 -- # set +x 00:18:56.218 true 00:18:56.218 17:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.218 17:22:25 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:56.218 17:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.218 17:22:25 -- common/autotest_common.sh@10 -- # set +x 00:18:56.218 true 00:18:56.218 17:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.218 17:22:25 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:56.218 17:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.218 17:22:25 -- common/autotest_common.sh@10 -- # set +x 00:18:56.218 true 00:18:56.218 17:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.218 17:22:25 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:56.218 17:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.218 17:22:25 -- common/autotest_common.sh@10 -- # set +x 00:18:56.218 true 00:18:56.218 17:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.218 17:22:25 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:56.218 17:22:25 -- target/initiator_timeout.sh@54 -- # wait 85188 00:19:52.480 00:19:52.480 job0: (groupid=0, jobs=1): err= 0: pid=85209: Thu Apr 25 17:23:20 2024 00:19:52.480 read: IOPS=870, BW=3482KiB/s (3566kB/s)(204MiB/60000msec) 00:19:52.480 slat (usec): min=11, max=13910, avg=14.98, stdev=72.12 00:19:52.480 clat (usec): min=154, max=40793k, avg=964.71, stdev=178485.41 00:19:52.480 lat (usec): min=166, max=40793k, avg=979.68, stdev=178485.42 00:19:52.480 clat percentiles (usec): 00:19:52.480 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:19:52.480 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:19:52.480 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 217], 00:19:52.480 | 99.00th=[ 237], 99.50th=[ 249], 99.90th=[ 302], 99.95th=[ 396], 00:19:52.480 | 99.99th=[ 537] 00:19:52.480 write: IOPS=878, BW=3516KiB/s (3600kB/s)(206MiB/60000msec); 0 zone resets 00:19:52.480 slat (usec): min=14, max=716, avg=21.31, stdev= 7.41 00:19:52.480 clat (usec): min=3, max=1689, avg=142.82, stdev=18.82 00:19:52.480 lat (usec): min=137, max=1709, avg=164.13, stdev=20.56 00:19:52.480 clat percentiles (usec): 00:19:52.480 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:19:52.480 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:19:52.480 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 00:19:52.480 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 260], 99.95th=[ 314], 00:19:52.480 | 99.99th=[ 537] 00:19:52.480 bw ( KiB/s): min= 5928, max=12288, per=100.00%, avg=10808.29, stdev=1332.34, samples=38 00:19:52.480 iops : min= 1482, max= 3072, avg=2702.05, stdev=333.07, samples=38 00:19:52.480 lat (usec) : 4=0.01%, 100=0.01%, 250=99.73%, 500=0.25%, 750=0.01% 00:19:52.480 lat (msec) : 2=0.01%, >=2000=0.01% 00:19:52.480 cpu : usr=0.59%, sys=2.30%, ctx=105001, majf=0, minf=2 00:19:52.480 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:52.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.480 issued rwts: total=52236,52736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.480 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:52.480 00:19:52.480 Run status group 0 (all jobs): 00:19:52.481 READ: bw=3482KiB/s (3566kB/s), 3482KiB/s-3482KiB/s (3566kB/s-3566kB/s), io=204MiB (214MB), run=60000-60000msec 00:19:52.481 WRITE: bw=3516KiB/s (3600kB/s), 3516KiB/s-3516KiB/s (3600kB/s-3600kB/s), io=206MiB (216MB), run=60000-60000msec 00:19:52.481 00:19:52.481 Disk stats (read/write): 00:19:52.481 nvme0n1: ios=52474/52224, merge=0/0, ticks=10194/8284, in_queue=18478, util=99.66% 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:52.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:52.481 17:23:20 -- common/autotest_common.sh@1205 -- # local i=0 00:19:52.481 17:23:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:52.481 17:23:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.481 17:23:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.481 17:23:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:52.481 17:23:20 -- common/autotest_common.sh@1217 -- # return 0 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:52.481 nvmf hotplug test: fio successful as expected 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.481 17:23:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.481 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:52.481 17:23:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:52.481 17:23:20 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:52.481 17:23:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:52.481 17:23:20 -- nvmf/common.sh@117 -- # sync 00:19:52.481 17:23:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.481 17:23:20 -- nvmf/common.sh@120 -- # set +e 00:19:52.481 17:23:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.481 17:23:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.481 rmmod nvme_tcp 00:19:52.481 rmmod nvme_fabrics 00:19:52.481 rmmod nvme_keyring 00:19:52.481 17:23:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.481 17:23:20 -- nvmf/common.sh@124 -- # set -e 00:19:52.481 17:23:20 -- nvmf/common.sh@125 -- # return 0 00:19:52.481 17:23:20 -- nvmf/common.sh@478 -- # '[' -n 85106 ']' 00:19:52.481 17:23:20 -- nvmf/common.sh@479 -- # killprocess 85106 00:19:52.481 17:23:20 -- common/autotest_common.sh@936 -- # '[' -z 85106 ']' 00:19:52.481 17:23:20 -- common/autotest_common.sh@940 -- # kill -0 85106 00:19:52.481 17:23:20 -- common/autotest_common.sh@941 -- # uname 00:19:52.481 17:23:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.481 17:23:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85106 00:19:52.481 killing process with pid 85106 
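The run above is the core of the initiator-timeout check: the Delay0 latencies are pushed far above their normal values while a 60-second fio write job is in flight, then dropped back to roughly 30 us so the queued I/O can drain, and the pass condition is simply that fio exits cleanly (fio_status=0, "fio successful as expected"). A minimal standalone sketch of that sequence, assuming an SPDK target already exposes the Delay0-backed namespace to the initiator as /dev/nvme0n1, that rpc.py talks to the default /var/tmp/spdk.sock, and with the verify options from the job file above omitted for brevity:

    # Sketch only: reproduce the latency bump/restore pattern from the log above.
    SPDK=/home/vagrant/spdk_repo/spdk
    rpc="$SPDK/scripts/rpc.py"

    # push all four latency knobs well past their normal values (values in usec)
    for knob in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$knob" 31000000
    done

    # time-based write job against the delayed namespace, run in the background
    fio --name=job0 --filename=/dev/nvme0n1 --rw=write --bs=4096 --iodepth=1 \
        --ioengine=libaio --direct=1 --time_based --runtime=60 &
    fio_pid=$!

    sleep 3

    # restore near-zero latency so the outstanding I/O completes without aborts
    for knob in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 "$knob" 30
    done

    wait "$fio_pid"   # expected to exit 0 if no command timed out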
00:19:52.481 17:23:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.481 17:23:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.481 17:23:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85106' 00:19:52.481 17:23:20 -- common/autotest_common.sh@955 -- # kill 85106 00:19:52.481 17:23:20 -- common/autotest_common.sh@960 -- # wait 85106 00:19:52.481 17:23:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:52.481 17:23:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:52.481 17:23:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:52.481 17:23:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.481 17:23:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.481 17:23:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.481 17:23:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.481 17:23:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.481 17:23:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:52.481 ************************************ 00:19:52.481 END TEST nvmf_initiator_timeout 00:19:52.481 ************************************ 00:19:52.481 00:19:52.481 real 1m4.445s 00:19:52.481 user 4m3.391s 00:19:52.481 sys 0m11.231s 00:19:52.481 17:23:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:52.481 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:52.481 17:23:20 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:19:52.481 17:23:20 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:19:52.481 17:23:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:52.481 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:52.481 17:23:20 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:19:52.481 17:23:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.481 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:52.481 17:23:20 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:19:52.481 17:23:20 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:52.481 17:23:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:52.481 17:23:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.481 17:23:20 -- common/autotest_common.sh@10 -- # set +x 00:19:52.481 ************************************ 00:19:52.481 START TEST nvmf_multicontroller 00:19:52.481 ************************************ 00:19:52.481 17:23:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:52.481 * Looking for test storage... 
00:19:52.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:52.481 17:23:20 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.481 17:23:20 -- nvmf/common.sh@7 -- # uname -s 00:19:52.481 17:23:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.481 17:23:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.481 17:23:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.481 17:23:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.481 17:23:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.481 17:23:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.481 17:23:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.481 17:23:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.481 17:23:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.481 17:23:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.481 17:23:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:19:52.481 17:23:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:19:52.481 17:23:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.481 17:23:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.481 17:23:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:52.481 17:23:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.481 17:23:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.481 17:23:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.481 17:23:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.481 17:23:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.481 17:23:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.481 17:23:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.481 17:23:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.481 17:23:20 -- paths/export.sh@5 -- # export PATH 00:19:52.481 17:23:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.481 17:23:20 -- nvmf/common.sh@47 -- # : 0 00:19:52.481 17:23:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.481 17:23:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.481 17:23:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.481 17:23:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.481 17:23:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.481 17:23:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.481 17:23:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.481 17:23:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.481 17:23:20 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.481 17:23:20 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.481 17:23:20 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:52.481 17:23:20 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:52.481 17:23:20 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.481 17:23:20 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:52.481 17:23:20 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:52.481 17:23:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:52.481 17:23:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.481 17:23:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.481 17:23:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.481 17:23:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.481 17:23:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.481 17:23:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.481 17:23:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.481 17:23:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:52.481 17:23:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:52.482 17:23:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:52.482 17:23:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:52.482 17:23:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:52.482 17:23:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:52.482 17:23:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.482 17:23:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:19:52.482 17:23:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:52.482 17:23:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:52.482 17:23:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.482 17:23:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:52.482 17:23:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.482 17:23:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.482 17:23:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.482 17:23:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.482 17:23:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.482 17:23:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.482 17:23:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:52.482 17:23:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:52.482 Cannot find device "nvmf_tgt_br" 00:19:52.482 17:23:20 -- nvmf/common.sh@155 -- # true 00:19:52.482 17:23:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.482 Cannot find device "nvmf_tgt_br2" 00:19:52.482 17:23:20 -- nvmf/common.sh@156 -- # true 00:19:52.482 17:23:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:52.482 17:23:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:52.482 Cannot find device "nvmf_tgt_br" 00:19:52.482 17:23:20 -- nvmf/common.sh@158 -- # true 00:19:52.482 17:23:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:52.482 Cannot find device "nvmf_tgt_br2" 00:19:52.482 17:23:20 -- nvmf/common.sh@159 -- # true 00:19:52.482 17:23:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:52.482 17:23:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:52.482 17:23:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.482 17:23:20 -- nvmf/common.sh@162 -- # true 00:19:52.482 17:23:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.482 17:23:20 -- nvmf/common.sh@163 -- # true 00:19:52.482 17:23:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.482 17:23:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.482 17:23:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.482 17:23:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.482 17:23:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.482 17:23:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.482 17:23:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.482 17:23:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:52.482 17:23:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:52.482 17:23:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:52.482 17:23:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:52.482 17:23:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
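nvmf_veth_init first tears down any stale interfaces (hence the harmless "Cannot find device" messages) and then rebuilds the virtual topology: a target network namespace, one veth pair for the initiator and two for the target, a bridge joining the host-side peers, and an iptables rule admitting NVMe/TCP traffic. A condensed sketch of that topology, using the same names and addresses as the log and assuming root privileges:

    # Sketch of the veth/netns topology built by nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic from the initiator veth reach port 4420
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT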
00:19:52.482 17:23:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:52.482 17:23:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.482 17:23:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.482 17:23:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.482 17:23:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:52.482 17:23:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:52.482 17:23:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.482 17:23:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.482 17:23:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.482 17:23:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.482 17:23:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.482 17:23:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:52.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:52.482 00:19:52.482 --- 10.0.0.2 ping statistics --- 00:19:52.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.482 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:52.482 17:23:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:52.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:19:52.482 00:19:52.482 --- 10.0.0.3 ping statistics --- 00:19:52.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.482 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:52.482 17:23:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:52.482 00:19:52.482 --- 10.0.0.1 ping statistics --- 00:19:52.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.482 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:52.482 17:23:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.482 17:23:20 -- nvmf/common.sh@422 -- # return 0 00:19:52.482 17:23:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:52.482 17:23:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.482 17:23:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:52.482 17:23:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:52.482 17:23:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.482 17:23:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:52.482 17:23:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:52.482 17:23:21 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:52.482 17:23:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:52.482 17:23:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.482 17:23:21 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 17:23:21 -- nvmf/common.sh@470 -- # nvmfpid=86039 00:19:52.482 17:23:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:52.482 17:23:21 -- nvmf/common.sh@471 -- # waitforlisten 86039 00:19:52.482 17:23:21 -- common/autotest_common.sh@817 -- # '[' -z 86039 ']' 00:19:52.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.482 17:23:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.482 17:23:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.482 17:23:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.482 17:23:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.482 17:23:21 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 [2024-04-25 17:23:21.081141] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:52.482 [2024-04-25 17:23:21.081225] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.482 [2024-04-25 17:23:21.213170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:52.482 [2024-04-25 17:23:21.259313] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.482 [2024-04-25 17:23:21.259364] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.482 [2024-04-25 17:23:21.259374] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.482 [2024-04-25 17:23:21.259380] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.482 [2024-04-25 17:23:21.259386] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
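With the topology verified by the three pings, nvme-tcp is loaded on the host side and the target application is started inside the namespace, with the harness blocking in waitforlisten until the RPC socket answers. A rough equivalent of that launch, assuming a built tree at /home/vagrant/spdk_repo/spdk and the default /var/tmp/spdk.sock (the polling loop is only a crude stand-in for waitforlisten):

    # Sketch: launch nvmf_tgt in the target namespace and wait for its RPC socket.
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # poll the JSON-RPC socket until the application answers
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done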
00:19:52.482 [2024-04-25 17:23:21.259543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.482 [2024-04-25 17:23:21.260228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.482 [2024-04-25 17:23:21.260259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.482 17:23:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:52.482 17:23:21 -- common/autotest_common.sh@850 -- # return 0 00:19:52.482 17:23:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:52.482 17:23:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:52.482 17:23:21 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 17:23:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.482 17:23:22 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.482 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.482 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 [2024-04-25 17:23:22.036547] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.482 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.482 17:23:22 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:52.482 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.482 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 Malloc0 00:19:52.482 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.482 17:23:22 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.482 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.482 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.482 17:23:22 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:52.482 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.482 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.482 17:23:22 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.482 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.482 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.482 [2024-04-25 17:23:22.096901] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.482 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.483 17:23:22 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:52.483 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.483 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 [2024-04-25 17:23:22.104822] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:52.483 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.483 17:23:22 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:52.483 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.483 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 Malloc1 00:19:52.483 17:23:22 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.483 17:23:22 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:52.483 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.483 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.483 17:23:22 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:52.483 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.483 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.483 17:23:22 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:52.483 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.483 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.483 17:23:22 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:52.483 17:23:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.483 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.483 17:23:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.483 17:23:22 -- host/multicontroller.sh@44 -- # bdevperf_pid=86091 00:19:52.483 17:23:22 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.483 17:23:22 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:52.483 17:23:22 -- host/multicontroller.sh@47 -- # waitforlisten 86091 /var/tmp/bdevperf.sock 00:19:52.483 17:23:22 -- common/autotest_common.sh@817 -- # '[' -z 86091 ']' 00:19:52.483 17:23:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.483 17:23:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.483 17:23:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
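The rpc_cmd calls above give the target one malloc-backed namespace per subsystem and register both cnode1 and cnode2 on ports 4420 and 4421 of 10.0.0.2, which is what the attach_controller cases below rely on. Replayed by hand against the default RPC socket, the same configuration would look roughly like this (the loop and variable names are illustrative only):

    # Sketch of the target-side configuration driven through rpc_cmd above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock

    $rpc nvmf_create_transport -t tcp -o -u 8192

    for i in 1 2; do
        $rpc bdev_malloc_create 64 512 -b "Malloc$((i - 1))"
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a \
            -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
        # two listeners per subsystem: one for each network path used by the test
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4421
    done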
00:19:52.483 17:23:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.483 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:19:53.420 17:23:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.420 17:23:23 -- common/autotest_common.sh@850 -- # return 0 00:19:53.420 17:23:23 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:53.421 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.421 NVMe0n1 00:19:53.421 17:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.421 17:23:23 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:53.421 17:23:23 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:53.421 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.421 17:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.421 1 00:19:53.421 17:23:23 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:53.421 17:23:23 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.421 17:23:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:53.421 17:23:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:53.421 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.421 2024/04/25 17:23:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:53.421 request: 00:19:53.421 { 00:19:53.421 "method": "bdev_nvme_attach_controller", 00:19:53.421 "params": { 00:19:53.421 "name": "NVMe0", 00:19:53.421 "trtype": "tcp", 00:19:53.421 "traddr": "10.0.0.2", 00:19:53.421 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:53.421 "hostaddr": "10.0.0.2", 00:19:53.421 "hostsvcid": "60000", 00:19:53.421 "adrfam": "ipv4", 00:19:53.421 "trsvcid": "4420", 00:19:53.421 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:53.421 } 00:19:53.421 } 00:19:53.421 Got JSON-RPC error response 00:19:53.421 GoRPCClient: error on JSON-RPC call 00:19:53.421 17:23:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:53.421 17:23:23 -- 
common/autotest_common.sh@641 -- # es=1 00:19:53.421 17:23:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:53.421 17:23:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:53.421 17:23:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:53.421 17:23:23 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:53.421 17:23:23 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.421 17:23:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:53.421 17:23:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:53.421 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.421 2024/04/25 17:23:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:53.421 request: 00:19:53.421 { 00:19:53.421 "method": "bdev_nvme_attach_controller", 00:19:53.421 "params": { 00:19:53.421 "name": "NVMe0", 00:19:53.421 "trtype": "tcp", 00:19:53.421 "traddr": "10.0.0.2", 00:19:53.421 "hostaddr": "10.0.0.2", 00:19:53.421 "hostsvcid": "60000", 00:19:53.421 "adrfam": "ipv4", 00:19:53.421 "trsvcid": "4420", 00:19:53.421 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:53.421 } 00:19:53.421 } 00:19:53.421 Got JSON-RPC error response 00:19:53.421 GoRPCClient: error on JSON-RPC call 00:19:53.421 17:23:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:53.421 17:23:23 -- common/autotest_common.sh@641 -- # es=1 00:19:53.421 17:23:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:53.421 17:23:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:53.421 17:23:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:53.421 17:23:23 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.421 17:23:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:53.421 17:23:23 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.421 2024/04/25 17:23:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:53.421 request: 00:19:53.421 { 00:19:53.421 "method": "bdev_nvme_attach_controller", 00:19:53.421 "params": { 00:19:53.421 "name": "NVMe0", 00:19:53.421 "trtype": "tcp", 00:19:53.421 "traddr": "10.0.0.2", 00:19:53.421 "hostaddr": "10.0.0.2", 00:19:53.421 "hostsvcid": "60000", 00:19:53.421 "adrfam": "ipv4", 00:19:53.421 "trsvcid": "4420", 00:19:53.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.421 "multipath": "disable" 00:19:53.421 } 00:19:53.421 } 00:19:53.421 Got JSON-RPC error response 00:19:53.421 GoRPCClient: error on JSON-RPC call 00:19:53.421 17:23:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:53.421 17:23:23 -- common/autotest_common.sh@641 -- # es=1 00:19:53.421 17:23:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:53.421 17:23:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:53.421 17:23:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:53.421 17:23:23 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:53.421 17:23:23 -- common/autotest_common.sh@638 -- # local es=0 00:19:53.421 17:23:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:53.421 17:23:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:53.421 17:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:53.421 17:23:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:53.421 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.421 2024/04/25 17:23:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:53.421 request: 00:19:53.421 { 00:19:53.421 "method": "bdev_nvme_attach_controller", 00:19:53.421 "params": { 00:19:53.421 "name": "NVMe0", 
00:19:53.421 "trtype": "tcp", 00:19:53.421 "traddr": "10.0.0.2", 00:19:53.421 "hostaddr": "10.0.0.2", 00:19:53.421 "hostsvcid": "60000", 00:19:53.421 "adrfam": "ipv4", 00:19:53.421 "trsvcid": "4420", 00:19:53.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.421 "multipath": "failover" 00:19:53.421 } 00:19:53.421 } 00:19:53.421 Got JSON-RPC error response 00:19:53.421 GoRPCClient: error on JSON-RPC call 00:19:53.421 17:23:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:53.421 17:23:23 -- common/autotest_common.sh@641 -- # es=1 00:19:53.421 17:23:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:53.421 17:23:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:53.421 17:23:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:53.421 17:23:23 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:53.421 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.421 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 00:19:53.680 17:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.680 17:23:23 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:53.680 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.680 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.680 17:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.681 17:23:23 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:53.681 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.681 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.681 00:19:53.681 17:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.681 17:23:23 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:53.681 17:23:23 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:53.681 17:23:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.681 17:23:23 -- common/autotest_common.sh@10 -- # set +x 00:19:53.681 17:23:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.681 17:23:23 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:53.681 17:23:23 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.061 0 00:19:55.061 17:23:24 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:55.061 17:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.061 17:23:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.061 17:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.061 17:23:24 -- host/multicontroller.sh@100 -- # killprocess 86091 00:19:55.061 17:23:24 -- common/autotest_common.sh@936 -- # '[' -z 86091 ']' 00:19:55.061 17:23:24 -- common/autotest_common.sh@940 -- # kill -0 86091 00:19:55.061 17:23:24 -- common/autotest_common.sh@941 -- # uname 00:19:55.061 17:23:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.061 17:23:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86091 00:19:55.061 killing process with pid 86091 00:19:55.061 
17:23:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:55.061 17:23:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:55.061 17:23:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86091' 00:19:55.061 17:23:24 -- common/autotest_common.sh@955 -- # kill 86091 00:19:55.061 17:23:24 -- common/autotest_common.sh@960 -- # wait 86091 00:19:55.061 17:23:24 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.061 17:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.061 17:23:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.061 17:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.061 17:23:24 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:55.061 17:23:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.061 17:23:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.061 17:23:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.061 17:23:24 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:55.061 17:23:24 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:55.062 17:23:24 -- common/autotest_common.sh@1598 -- # read -r file 00:19:55.062 17:23:24 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:55.062 17:23:24 -- common/autotest_common.sh@1597 -- # sort -u 00:19:55.062 17:23:24 -- common/autotest_common.sh@1599 -- # cat 00:19:55.062 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:55.062 [2024-04-25 17:23:22.216402] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:55.062 [2024-04-25 17:23:22.216511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86091 ] 00:19:55.062 [2024-04-25 17:23:22.356466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.062 [2024-04-25 17:23:22.424362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.062 [2024-04-25 17:23:23.494077] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 8a403cdb-dfd1-41d9-8eff-1a8992b5855d already exists 00:19:55.062 [2024-04-25 17:23:23.494125] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:8a403cdb-dfd1-41d9-8eff-1a8992b5855d alias for bdev NVMe1n1 00:19:55.062 [2024-04-25 17:23:23.494159] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:55.062 Running I/O for 1 seconds... 
00:19:55.062 00:19:55.062 Latency(us) 00:19:55.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.062 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:55.062 NVMe0n1 : 1.01 21387.77 83.55 0.00 0.00 5976.39 1742.66 13047.62 00:19:55.062 =================================================================================================================== 00:19:55.062 Total : 21387.77 83.55 0.00 0.00 5976.39 1742.66 13047.62 00:19:55.062 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.062 00:19:55.062 Latency(us) 00:19:55.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.062 =================================================================================================================== 00:19:55.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.062 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:55.062 17:23:24 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:55.062 17:23:24 -- common/autotest_common.sh@1598 -- # read -r file 00:19:55.062 17:23:24 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:55.062 17:23:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:55.062 17:23:24 -- nvmf/common.sh@117 -- # sync 00:19:55.062 17:23:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.062 17:23:24 -- nvmf/common.sh@120 -- # set +e 00:19:55.062 17:23:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.062 17:23:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.062 rmmod nvme_tcp 00:19:55.062 rmmod nvme_fabrics 00:19:55.062 rmmod nvme_keyring 00:19:55.062 17:23:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.062 17:23:25 -- nvmf/common.sh@124 -- # set -e 00:19:55.062 17:23:25 -- nvmf/common.sh@125 -- # return 0 00:19:55.062 17:23:25 -- nvmf/common.sh@478 -- # '[' -n 86039 ']' 00:19:55.062 17:23:25 -- nvmf/common.sh@479 -- # killprocess 86039 00:19:55.062 17:23:25 -- common/autotest_common.sh@936 -- # '[' -z 86039 ']' 00:19:55.062 17:23:25 -- common/autotest_common.sh@940 -- # kill -0 86039 00:19:55.062 17:23:25 -- common/autotest_common.sh@941 -- # uname 00:19:55.062 17:23:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.062 17:23:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86039 00:19:55.321 killing process with pid 86039 00:19:55.321 17:23:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:55.321 17:23:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:55.321 17:23:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86039' 00:19:55.321 17:23:25 -- common/autotest_common.sh@955 -- # kill 86039 00:19:55.321 17:23:25 -- common/autotest_common.sh@960 -- # wait 86039 00:19:55.321 17:23:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:55.321 17:23:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:55.321 17:23:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:55.321 17:23:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.321 17:23:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:55.321 17:23:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.321 17:23:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.321 17:23:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.321 17:23:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:55.321 
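The interesting part of the run just completed is the sequence of bdev_nvme_attach_controller calls against the bdevperf RPC socket: re-attaching NVMe0 with a different hostnqn, against a different subsystem, with multipath disabled, or with -x failover on the very same listener are all rejected with "already exists" errors, while attaching the same controller name on the second listener (4421), and later a separate NVMe1 controller there, both succeed before the queued workload is run. The passing calls, replayed by hand and assuming bdevperf was started with -z -r /var/tmp/bdevperf.sock as above:

    # Sketch: the successful attach/detach sequence from the test, by hand.
    SPDK=/home/vagrant/spdk_repo/spdk
    rpc="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # first controller/path to cnode1 (exposes bdev NVMe0n1 to bdevperf)
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # the second listener can be added and removed as an additional path ...
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1

    # ... or attached as a second, independently named controller
    $rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    $rpc bdev_nvme_get_controllers   # the test expects exactly 2 controllers here

    # run the queued bdevperf workload (-q 128 -o 4096 -w write -t 1)
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests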
00:19:55.321 real 0m4.695s 00:19:55.321 user 0m15.087s 00:19:55.321 sys 0m0.940s 00:19:55.321 17:23:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:55.321 ************************************ 00:19:55.321 END TEST nvmf_multicontroller 00:19:55.321 ************************************ 00:19:55.321 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.581 17:23:25 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:55.581 17:23:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:55.581 17:23:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:55.581 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.581 ************************************ 00:19:55.581 START TEST nvmf_aer 00:19:55.581 ************************************ 00:19:55.581 17:23:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:55.581 * Looking for test storage... 00:19:55.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:55.581 17:23:25 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.581 17:23:25 -- nvmf/common.sh@7 -- # uname -s 00:19:55.581 17:23:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.581 17:23:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.581 17:23:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.581 17:23:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.581 17:23:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.581 17:23:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.581 17:23:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.581 17:23:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.581 17:23:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.581 17:23:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.581 17:23:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:19:55.581 17:23:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:19:55.581 17:23:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.581 17:23:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.581 17:23:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.581 17:23:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.581 17:23:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.581 17:23:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.581 17:23:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.581 17:23:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.581 17:23:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.581 17:23:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.581 17:23:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.581 17:23:25 -- paths/export.sh@5 -- # export PATH 00:19:55.581 17:23:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.581 17:23:25 -- nvmf/common.sh@47 -- # : 0 00:19:55.581 17:23:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.581 17:23:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.581 17:23:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.581 17:23:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.581 17:23:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.581 17:23:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.581 17:23:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.581 17:23:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.581 17:23:25 -- host/aer.sh@11 -- # nvmftestinit 00:19:55.581 17:23:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:55.581 17:23:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.581 17:23:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:55.581 17:23:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:55.581 17:23:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:55.581 17:23:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.581 17:23:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.581 17:23:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.581 17:23:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:55.581 17:23:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:55.581 17:23:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:55.581 17:23:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:55.581 17:23:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:55.581 17:23:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:55.581 17:23:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.581 17:23:25 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.581 17:23:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:55.581 17:23:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:55.581 17:23:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:55.581 17:23:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:55.581 17:23:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:55.581 17:23:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.581 17:23:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:55.581 17:23:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:55.581 17:23:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:55.581 17:23:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:55.581 17:23:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:55.581 17:23:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:55.581 Cannot find device "nvmf_tgt_br" 00:19:55.581 17:23:25 -- nvmf/common.sh@155 -- # true 00:19:55.581 17:23:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.841 Cannot find device "nvmf_tgt_br2" 00:19:55.841 17:23:25 -- nvmf/common.sh@156 -- # true 00:19:55.841 17:23:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:55.841 17:23:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:55.841 Cannot find device "nvmf_tgt_br" 00:19:55.841 17:23:25 -- nvmf/common.sh@158 -- # true 00:19:55.841 17:23:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:55.841 Cannot find device "nvmf_tgt_br2" 00:19:55.841 17:23:25 -- nvmf/common.sh@159 -- # true 00:19:55.841 17:23:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:55.841 17:23:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:55.841 17:23:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.841 17:23:25 -- nvmf/common.sh@162 -- # true 00:19:55.841 17:23:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.841 17:23:25 -- nvmf/common.sh@163 -- # true 00:19:55.841 17:23:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:55.841 17:23:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:55.841 17:23:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:55.841 17:23:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:55.841 17:23:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:55.841 17:23:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:55.841 17:23:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:55.841 17:23:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:55.841 17:23:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:55.841 17:23:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:55.841 17:23:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:55.841 17:23:25 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:55.841 17:23:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:55.841 17:23:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:55.841 17:23:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:55.841 17:23:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:55.841 17:23:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:55.841 17:23:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:55.841 17:23:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:55.841 17:23:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:56.100 17:23:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:56.100 17:23:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:56.100 17:23:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:56.100 17:23:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:56.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:56.100 00:19:56.100 --- 10.0.0.2 ping statistics --- 00:19:56.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.100 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:56.100 17:23:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:56.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:56.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:19:56.100 00:19:56.100 --- 10.0.0.3 ping statistics --- 00:19:56.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.100 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:56.100 17:23:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:56.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:56.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:56.100 00:19:56.100 --- 10.0.0.1 ping statistics --- 00:19:56.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.100 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:56.100 17:23:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.100 17:23:25 -- nvmf/common.sh@422 -- # return 0 00:19:56.100 17:23:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:56.100 17:23:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.100 17:23:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:56.100 17:23:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:56.100 17:23:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.100 17:23:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:56.100 17:23:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:56.100 17:23:25 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:56.100 17:23:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:56.101 17:23:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:56.101 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:56.101 17:23:25 -- nvmf/common.sh@470 -- # nvmfpid=86350 00:19:56.101 17:23:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:56.101 17:23:25 -- nvmf/common.sh@471 -- # waitforlisten 86350 00:19:56.101 17:23:25 -- common/autotest_common.sh@817 -- # '[' -z 86350 ']' 00:19:56.101 17:23:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.101 17:23:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:56.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.101 17:23:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.101 17:23:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:56.101 17:23:25 -- common/autotest_common.sh@10 -- # set +x 00:19:56.101 [2024-04-25 17:23:25.949659] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:56.101 [2024-04-25 17:23:25.949805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.360 [2024-04-25 17:23:26.088040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.360 [2024-04-25 17:23:26.137397] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.360 [2024-04-25 17:23:26.137455] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.360 [2024-04-25 17:23:26.137465] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.360 [2024-04-25 17:23:26.137471] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.360 [2024-04-25 17:23:26.137476] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
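The nvmf_veth_init block traced above reduces to a small reproducible topology: one initiator veth pair left in the root namespace, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, and all of the bridge-side ends enslaved to a single nvmf_br bridge. A minimal standalone sketch of that setup, reusing the interface names and 10.0.0.0/24 addresses shown in the trace (run as root; the teardown and error handling the script also performs are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator side can now reach the target-side IPs

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected rather than failures: nvmf_veth_init first tears down any topology left over from a previous run, and on a freshly provisioned VM there is nothing to remove.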
00:19:56.360 [2024-04-25 17:23:26.137616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.360 [2024-04-25 17:23:26.137909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.360 [2024-04-25 17:23:26.138311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.360 [2024-04-25 17:23:26.138341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.928 17:23:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:56.928 17:23:26 -- common/autotest_common.sh@850 -- # return 0 00:19:56.928 17:23:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:56.928 17:23:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:56.928 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.928 17:23:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.928 17:23:26 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.928 17:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.928 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 [2024-04-25 17:23:26.909515] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.188 17:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.188 17:23:26 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:57.188 17:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.188 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 Malloc0 00:19:57.188 17:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.188 17:23:26 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:57.188 17:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.188 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 17:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.188 17:23:26 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.188 17:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.188 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 17:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.188 17:23:26 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.188 17:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.188 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 [2024-04-25 17:23:26.972130] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.188 17:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.188 17:23:26 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:57.188 17:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.188 17:23:26 -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 [2024-04-25 17:23:26.979911] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:57.188 [ 00:19:57.188 { 00:19:57.188 "allow_any_host": true, 00:19:57.188 "hosts": [], 00:19:57.188 "listen_addresses": [], 00:19:57.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:57.188 "subtype": "Discovery" 00:19:57.188 }, 00:19:57.188 { 00:19:57.188 "allow_any_host": true, 00:19:57.188 "hosts": 
[], 00:19:57.188 "listen_addresses": [ 00:19:57.188 { 00:19:57.188 "adrfam": "IPv4", 00:19:57.188 "traddr": "10.0.0.2", 00:19:57.188 "transport": "TCP", 00:19:57.188 "trsvcid": "4420", 00:19:57.188 "trtype": "TCP" 00:19:57.188 } 00:19:57.188 ], 00:19:57.188 "max_cntlid": 65519, 00:19:57.188 "max_namespaces": 2, 00:19:57.188 "min_cntlid": 1, 00:19:57.188 "model_number": "SPDK bdev Controller", 00:19:57.188 "namespaces": [ 00:19:57.188 { 00:19:57.188 "bdev_name": "Malloc0", 00:19:57.188 "name": "Malloc0", 00:19:57.188 "nguid": "A82F302A2871422E82C11380D70C7114", 00:19:57.188 "nsid": 1, 00:19:57.188 "uuid": "a82f302a-2871-422e-82c1-1380d70c7114" 00:19:57.188 } 00:19:57.188 ], 00:19:57.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.188 "serial_number": "SPDK00000000000001", 00:19:57.188 "subtype": "NVMe" 00:19:57.188 } 00:19:57.188 ] 00:19:57.188 17:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.188 17:23:26 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:57.188 17:23:26 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:57.188 17:23:26 -- host/aer.sh@33 -- # aerpid=86404 00:19:57.188 17:23:26 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:57.188 17:23:26 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:57.188 17:23:26 -- common/autotest_common.sh@1251 -- # local i=0 00:19:57.188 17:23:26 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:57.188 17:23:27 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:19:57.188 17:23:27 -- common/autotest_common.sh@1254 -- # i=1 00:19:57.188 17:23:27 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:19:57.188 17:23:27 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:57.188 17:23:27 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:19:57.188 17:23:27 -- common/autotest_common.sh@1254 -- # i=2 00:19:57.188 17:23:27 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:19:57.448 17:23:27 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:57.448 17:23:27 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:57.448 17:23:27 -- common/autotest_common.sh@1262 -- # return 0 00:19:57.448 17:23:27 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:57.448 17:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.448 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.448 Malloc1 00:19:57.448 17:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.448 17:23:27 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:57.448 17:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.448 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.448 17:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.448 17:23:27 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:57.448 17:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.448 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.448 [ 00:19:57.448 { 00:19:57.448 "allow_any_host": true, 00:19:57.448 "hosts": [], 00:19:57.448 "listen_addresses": [], 00:19:57.448 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:57.448 "subtype": "Discovery" 00:19:57.448 }, 00:19:57.448 { 00:19:57.448 "allow_any_host": true, 00:19:57.448 "hosts": [], 00:19:57.448 "listen_addresses": [ 00:19:57.448 { 00:19:57.448 "adrfam": "IPv4", 00:19:57.448 "traddr": "10.0.0.2", 00:19:57.448 "transport": "TCP", 00:19:57.448 "trsvcid": "4420", 00:19:57.448 "trtype": "TCP" 00:19:57.448 } 00:19:57.448 ], 00:19:57.448 "max_cntlid": 65519, 00:19:57.448 "max_namespaces": 2, 00:19:57.448 "min_cntlid": 1, 00:19:57.448 Asynchronous Event Request test 00:19:57.448 Attaching to 10.0.0.2 00:19:57.448 Attached to 10.0.0.2 00:19:57.448 Registering asynchronous event callbacks... 00:19:57.448 Starting namespace attribute notice tests for all controllers... 00:19:57.448 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:57.448 aer_cb - Changed Namespace 00:19:57.448 Cleaning up... 
00:19:57.448 "model_number": "SPDK bdev Controller", 00:19:57.448 "namespaces": [ 00:19:57.448 { 00:19:57.448 "bdev_name": "Malloc0", 00:19:57.448 "name": "Malloc0", 00:19:57.448 "nguid": "A82F302A2871422E82C11380D70C7114", 00:19:57.448 "nsid": 1, 00:19:57.448 "uuid": "a82f302a-2871-422e-82c1-1380d70c7114" 00:19:57.448 }, 00:19:57.448 { 00:19:57.448 "bdev_name": "Malloc1", 00:19:57.448 "name": "Malloc1", 00:19:57.448 "nguid": "DF1D38559DA44E958A4FA70D94335E79", 00:19:57.448 "nsid": 2, 00:19:57.448 "uuid": "df1d3855-9da4-4e95-8a4f-a70d94335e79" 00:19:57.448 } 00:19:57.448 ], 00:19:57.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.448 "serial_number": "SPDK00000000000001", 00:19:57.448 "subtype": "NVMe" 00:19:57.448 } 00:19:57.448 ] 00:19:57.448 17:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.448 17:23:27 -- host/aer.sh@43 -- # wait 86404 00:19:57.448 17:23:27 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:57.448 17:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.448 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.448 17:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.448 17:23:27 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:57.448 17:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.449 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.449 17:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.449 17:23:27 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.449 17:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.449 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.449 17:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.449 17:23:27 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:57.449 17:23:27 -- host/aer.sh@51 -- # nvmftestfini 00:19:57.449 17:23:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:57.449 17:23:27 -- nvmf/common.sh@117 -- # sync 00:19:57.449 17:23:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.449 17:23:27 -- nvmf/common.sh@120 -- # set +e 00:19:57.449 17:23:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.449 17:23:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.449 rmmod nvme_tcp 00:19:57.449 rmmod nvme_fabrics 00:19:57.449 rmmod nvme_keyring 00:19:57.449 17:23:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.449 17:23:27 -- nvmf/common.sh@124 -- # set -e 00:19:57.449 17:23:27 -- nvmf/common.sh@125 -- # return 0 00:19:57.449 17:23:27 -- nvmf/common.sh@478 -- # '[' -n 86350 ']' 00:19:57.449 17:23:27 -- nvmf/common.sh@479 -- # killprocess 86350 00:19:57.449 17:23:27 -- common/autotest_common.sh@936 -- # '[' -z 86350 ']' 00:19:57.449 17:23:27 -- common/autotest_common.sh@940 -- # kill -0 86350 00:19:57.449 17:23:27 -- common/autotest_common.sh@941 -- # uname 00:19:57.449 17:23:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:57.449 17:23:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86350 00:19:57.708 killing process with pid 86350 00:19:57.708 17:23:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:57.708 17:23:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:57.708 17:23:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86350' 00:19:57.708 17:23:27 -- common/autotest_common.sh@955 -- # kill 86350 00:19:57.708 [2024-04-25 17:23:27.445413] app.c: 937:log_deprecation_hits: 
*WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:57.708 17:23:27 -- common/autotest_common.sh@960 -- # wait 86350 00:19:57.708 17:23:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:57.708 17:23:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:57.708 17:23:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:57.708 17:23:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.708 17:23:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.708 17:23:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.708 17:23:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.709 17:23:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.709 17:23:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:57.709 ************************************ 00:19:57.709 END TEST nvmf_aer 00:19:57.709 ************************************ 00:19:57.709 00:19:57.709 real 0m2.249s 00:19:57.709 user 0m6.069s 00:19:57.709 sys 0m0.577s 00:19:57.709 17:23:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:57.709 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.969 17:23:27 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:57.969 17:23:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:57.969 17:23:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.969 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:19:57.969 ************************************ 00:19:57.969 START TEST nvmf_async_init 00:19:57.969 ************************************ 00:19:57.969 17:23:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:57.969 * Looking for test storage... 
00:19:57.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.969 17:23:27 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.969 17:23:27 -- nvmf/common.sh@7 -- # uname -s 00:19:57.969 17:23:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.969 17:23:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.969 17:23:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.969 17:23:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.969 17:23:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.969 17:23:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.969 17:23:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.969 17:23:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.969 17:23:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.969 17:23:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.969 17:23:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:19:57.969 17:23:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:19:57.969 17:23:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.969 17:23:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.969 17:23:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.969 17:23:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.969 17:23:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.969 17:23:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.969 17:23:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.969 17:23:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.969 17:23:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.969 17:23:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.969 17:23:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.969 17:23:27 -- paths/export.sh@5 -- # export PATH 00:19:57.969 17:23:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.969 17:23:27 -- nvmf/common.sh@47 -- # : 0 00:19:57.969 17:23:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.969 17:23:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.969 17:23:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.969 17:23:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.969 17:23:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.969 17:23:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.969 17:23:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.969 17:23:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.969 17:23:27 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:57.969 17:23:27 -- host/async_init.sh@14 -- # null_block_size=512 00:19:57.969 17:23:27 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:57.969 17:23:27 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:57.969 17:23:27 -- host/async_init.sh@20 -- # uuidgen 00:19:57.969 17:23:27 -- host/async_init.sh@20 -- # tr -d - 00:19:57.969 17:23:27 -- host/async_init.sh@20 -- # nguid=e6a95cac665c409ea31c6db5182e51ec 00:19:57.969 17:23:27 -- host/async_init.sh@22 -- # nvmftestinit 00:19:57.969 17:23:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:57.969 17:23:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.969 17:23:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:57.969 17:23:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:57.969 17:23:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:57.969 17:23:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.969 17:23:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.969 17:23:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.969 17:23:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:57.969 17:23:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:57.969 17:23:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:57.969 17:23:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:57.969 17:23:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:57.969 17:23:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:57.969 17:23:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.969 17:23:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.969 17:23:27 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.969 17:23:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:57.969 17:23:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.969 17:23:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.969 17:23:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.969 17:23:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.969 17:23:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.969 17:23:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.969 17:23:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.969 17:23:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.969 17:23:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:57.969 17:23:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:57.969 Cannot find device "nvmf_tgt_br" 00:19:57.969 17:23:27 -- nvmf/common.sh@155 -- # true 00:19:57.969 17:23:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.969 Cannot find device "nvmf_tgt_br2" 00:19:57.969 17:23:27 -- nvmf/common.sh@156 -- # true 00:19:57.969 17:23:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:57.969 17:23:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:57.969 Cannot find device "nvmf_tgt_br" 00:19:57.969 17:23:27 -- nvmf/common.sh@158 -- # true 00:19:57.969 17:23:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:57.969 Cannot find device "nvmf_tgt_br2" 00:19:57.969 17:23:27 -- nvmf/common.sh@159 -- # true 00:19:57.969 17:23:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:58.229 17:23:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:58.229 17:23:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.229 17:23:28 -- nvmf/common.sh@162 -- # true 00:19:58.229 17:23:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.229 17:23:28 -- nvmf/common.sh@163 -- # true 00:19:58.229 17:23:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.229 17:23:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.229 17:23:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.229 17:23:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.229 17:23:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.229 17:23:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.229 17:23:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.229 17:23:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:58.229 17:23:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:58.229 17:23:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:58.229 17:23:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:58.229 17:23:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:58.229 17:23:28 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:58.229 17:23:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:58.229 17:23:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:58.229 17:23:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:58.229 17:23:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:58.229 17:23:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:58.229 17:23:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:58.229 17:23:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:58.229 17:23:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:58.229 17:23:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:58.229 17:23:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:58.229 17:23:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:58.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:19:58.229 00:19:58.229 --- 10.0.0.2 ping statistics --- 00:19:58.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.229 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:58.229 17:23:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:58.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:58.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:58.229 00:19:58.229 --- 10.0.0.3 ping statistics --- 00:19:58.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.229 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:58.229 17:23:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:58.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:58.229 00:19:58.229 --- 10.0.0.1 ping statistics --- 00:19:58.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.229 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:58.229 17:23:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.229 17:23:28 -- nvmf/common.sh@422 -- # return 0 00:19:58.229 17:23:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:58.229 17:23:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.229 17:23:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:58.229 17:23:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:58.229 17:23:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.229 17:23:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:58.229 17:23:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:58.488 17:23:28 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:58.488 17:23:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:58.488 17:23:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:58.488 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.488 17:23:28 -- nvmf/common.sh@470 -- # nvmfpid=86577 00:19:58.488 17:23:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:58.488 17:23:28 -- nvmf/common.sh@471 -- # waitforlisten 86577 00:19:58.488 17:23:28 -- common/autotest_common.sh@817 -- # '[' -z 86577 ']' 00:19:58.488 17:23:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.488 17:23:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.488 17:23:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.488 17:23:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.488 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.488 [2024-04-25 17:23:28.276196] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:19:58.488 [2024-04-25 17:23:28.276293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.488 [2024-04-25 17:23:28.417399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.747 [2024-04-25 17:23:28.468043] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.747 [2024-04-25 17:23:28.468090] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.747 [2024-04-25 17:23:28.468117] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.747 [2024-04-25 17:23:28.468124] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.747 [2024-04-25 17:23:28.468131] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
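The nvmfappstart step here follows the same launch-and-wait pattern as the aer run above, but with a single-core mask. Condensed, it amounts to starting nvmf_tgt inside the namespace and polling for its RPC socket; the invocation below is taken verbatim from the trace, while the polling loop is a simplification of the script's waitforlisten helper and the roughly 10-second bound is an arbitrary choice for this sketch:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # Wait for the UNIX-domain RPC socket before issuing any rpc_cmd calls.
    for _ in $(seq 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done

Because the core mask is 0x1, only one "Reactor started" notice follows, whereas the earlier aer run with -m 0xF started reactors on four cores.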
00:19:58.747 [2024-04-25 17:23:28.468163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.747 17:23:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:58.747 17:23:28 -- common/autotest_common.sh@850 -- # return 0 00:19:58.747 17:23:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:58.747 17:23:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 17:23:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.747 17:23:28 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:58.747 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 [2024-04-25 17:23:28.595590] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.747 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.747 17:23:28 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:58.747 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 null0 00:19:58.747 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.747 17:23:28 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:58.747 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.747 17:23:28 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:58.747 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.747 17:23:28 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e6a95cac665c409ea31c6db5182e51ec 00:19:58.747 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.747 17:23:28 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.747 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.747 [2024-04-25 17:23:28.639695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.747 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.747 17:23:28 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:58.747 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.747 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:59.005 nvme0n1 00:19:59.005 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.005 17:23:28 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:59.005 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.005 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:59.005 [ 00:19:59.005 { 00:19:59.005 "aliases": [ 00:19:59.005 "e6a95cac-665c-409e-a31c-6db5182e51ec" 
00:19:59.005 ], 00:19:59.005 "assigned_rate_limits": { 00:19:59.005 "r_mbytes_per_sec": 0, 00:19:59.005 "rw_ios_per_sec": 0, 00:19:59.005 "rw_mbytes_per_sec": 0, 00:19:59.005 "w_mbytes_per_sec": 0 00:19:59.005 }, 00:19:59.005 "block_size": 512, 00:19:59.005 "claimed": false, 00:19:59.005 "driver_specific": { 00:19:59.005 "mp_policy": "active_passive", 00:19:59.005 "nvme": [ 00:19:59.005 { 00:19:59.005 "ctrlr_data": { 00:19:59.005 "ana_reporting": false, 00:19:59.005 "cntlid": 1, 00:19:59.005 "firmware_revision": "24.05", 00:19:59.005 "model_number": "SPDK bdev Controller", 00:19:59.005 "multi_ctrlr": true, 00:19:59.005 "oacs": { 00:19:59.005 "firmware": 0, 00:19:59.005 "format": 0, 00:19:59.005 "ns_manage": 0, 00:19:59.005 "security": 0 00:19:59.005 }, 00:19:59.005 "serial_number": "00000000000000000000", 00:19:59.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.005 "vendor_id": "0x8086" 00:19:59.005 }, 00:19:59.005 "ns_data": { 00:19:59.005 "can_share": true, 00:19:59.005 "id": 1 00:19:59.005 }, 00:19:59.005 "trid": { 00:19:59.005 "adrfam": "IPv4", 00:19:59.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.005 "traddr": "10.0.0.2", 00:19:59.005 "trsvcid": "4420", 00:19:59.005 "trtype": "TCP" 00:19:59.005 }, 00:19:59.005 "vs": { 00:19:59.005 "nvme_version": "1.3" 00:19:59.005 } 00:19:59.005 } 00:19:59.005 ] 00:19:59.005 }, 00:19:59.005 "memory_domains": [ 00:19:59.005 { 00:19:59.005 "dma_device_id": "system", 00:19:59.005 "dma_device_type": 1 00:19:59.005 } 00:19:59.005 ], 00:19:59.005 "name": "nvme0n1", 00:19:59.005 "num_blocks": 2097152, 00:19:59.005 "product_name": "NVMe disk", 00:19:59.005 "supported_io_types": { 00:19:59.005 "abort": true, 00:19:59.005 "compare": true, 00:19:59.005 "compare_and_write": true, 00:19:59.005 "flush": true, 00:19:59.005 "nvme_admin": true, 00:19:59.005 "nvme_io": true, 00:19:59.005 "read": true, 00:19:59.005 "reset": true, 00:19:59.005 "unmap": false, 00:19:59.005 "write": true, 00:19:59.005 "write_zeroes": true 00:19:59.005 }, 00:19:59.005 "uuid": "e6a95cac-665c-409e-a31c-6db5182e51ec", 00:19:59.005 "zoned": false 00:19:59.005 } 00:19:59.005 ] 00:19:59.005 17:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.005 17:23:28 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:59.005 17:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.005 17:23:28 -- common/autotest_common.sh@10 -- # set +x 00:19:59.005 [2024-04-25 17:23:28.920012] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:59.005 [2024-04-25 17:23:28.920233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2138c10 (9): Bad file descriptor 00:19:59.264 [2024-04-25 17:23:29.052868] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
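The controller that was just reset was assembled entirely over JSON-RPC, and the same sequence can be replayed by hand. Below is a minimal sketch using scripts/rpc.py, with the transport options, bdev name, subsystem NQN, namespace GUID, and listener address copied from the trace; driving it through rpc.py against the default /var/tmp/spdk.sock socket is an assumption about how one would reproduce this outside the rpc_cmd test helper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e6a95cac665c409ea31c6db5182e51ec
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # The same SPDK app also acts as the host: it attaches to its own listener over the veth link.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_nvme_reset_controller nvme0
    $rpc bdev_get_bdevs -b nvme0n1

After the reset the host re-establishes the fabric association, which is why the bdev_get_bdevs dump that follows reports cntlid 2 where the pre-reset dump reported cntlid 1.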
00:19:59.264 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.264 17:23:29 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:59.264 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.264 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.264 [ 00:19:59.264 { 00:19:59.264 "aliases": [ 00:19:59.264 "e6a95cac-665c-409e-a31c-6db5182e51ec" 00:19:59.264 ], 00:19:59.264 "assigned_rate_limits": { 00:19:59.264 "r_mbytes_per_sec": 0, 00:19:59.264 "rw_ios_per_sec": 0, 00:19:59.264 "rw_mbytes_per_sec": 0, 00:19:59.264 "w_mbytes_per_sec": 0 00:19:59.264 }, 00:19:59.264 "block_size": 512, 00:19:59.264 "claimed": false, 00:19:59.264 "driver_specific": { 00:19:59.264 "mp_policy": "active_passive", 00:19:59.264 "nvme": [ 00:19:59.264 { 00:19:59.264 "ctrlr_data": { 00:19:59.264 "ana_reporting": false, 00:19:59.264 "cntlid": 2, 00:19:59.264 "firmware_revision": "24.05", 00:19:59.264 "model_number": "SPDK bdev Controller", 00:19:59.264 "multi_ctrlr": true, 00:19:59.264 "oacs": { 00:19:59.264 "firmware": 0, 00:19:59.264 "format": 0, 00:19:59.264 "ns_manage": 0, 00:19:59.264 "security": 0 00:19:59.264 }, 00:19:59.264 "serial_number": "00000000000000000000", 00:19:59.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.264 "vendor_id": "0x8086" 00:19:59.264 }, 00:19:59.264 "ns_data": { 00:19:59.264 "can_share": true, 00:19:59.264 "id": 1 00:19:59.264 }, 00:19:59.264 "trid": { 00:19:59.264 "adrfam": "IPv4", 00:19:59.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.264 "traddr": "10.0.0.2", 00:19:59.264 "trsvcid": "4420", 00:19:59.264 "trtype": "TCP" 00:19:59.264 }, 00:19:59.264 "vs": { 00:19:59.264 "nvme_version": "1.3" 00:19:59.264 } 00:19:59.264 } 00:19:59.264 ] 00:19:59.264 }, 00:19:59.264 "memory_domains": [ 00:19:59.264 { 00:19:59.264 "dma_device_id": "system", 00:19:59.264 "dma_device_type": 1 00:19:59.264 } 00:19:59.264 ], 00:19:59.264 "name": "nvme0n1", 00:19:59.264 "num_blocks": 2097152, 00:19:59.264 "product_name": "NVMe disk", 00:19:59.264 "supported_io_types": { 00:19:59.264 "abort": true, 00:19:59.264 "compare": true, 00:19:59.264 "compare_and_write": true, 00:19:59.264 "flush": true, 00:19:59.264 "nvme_admin": true, 00:19:59.264 "nvme_io": true, 00:19:59.264 "read": true, 00:19:59.264 "reset": true, 00:19:59.264 "unmap": false, 00:19:59.264 "write": true, 00:19:59.264 "write_zeroes": true 00:19:59.264 }, 00:19:59.264 "uuid": "e6a95cac-665c-409e-a31c-6db5182e51ec", 00:19:59.264 "zoned": false 00:19:59.264 } 00:19:59.264 ] 00:19:59.265 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.265 17:23:29 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.265 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.265 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.265 17:23:29 -- host/async_init.sh@53 -- # mktemp 00:19:59.265 17:23:29 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8Gx2RcG6Cm 00:19:59.265 17:23:29 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:59.265 17:23:29 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8Gx2RcG6Cm 00:19:59.265 17:23:29 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.265 17:23:29 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.265 17:23:29 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:59.265 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.265 [2024-04-25 17:23:29.128227] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.265 [2024-04-25 17:23:29.128521] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:59.265 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.265 17:23:29 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Gx2RcG6Cm 00:19:59.265 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.265 [2024-04-25 17:23:29.136225] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:59.265 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.265 17:23:29 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Gx2RcG6Cm 00:19:59.265 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.265 [2024-04-25 17:23:29.144209] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.265 [2024-04-25 17:23:29.144286] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:59.265 nvme0n1 00:19:59.265 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.265 17:23:29 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:59.265 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.265 [ 00:19:59.265 { 00:19:59.265 "aliases": [ 00:19:59.265 "e6a95cac-665c-409e-a31c-6db5182e51ec" 00:19:59.265 ], 00:19:59.265 "assigned_rate_limits": { 00:19:59.265 "r_mbytes_per_sec": 0, 00:19:59.265 "rw_ios_per_sec": 0, 00:19:59.265 "rw_mbytes_per_sec": 0, 00:19:59.265 "w_mbytes_per_sec": 0 00:19:59.265 }, 00:19:59.265 "block_size": 512, 00:19:59.265 "claimed": false, 00:19:59.265 "driver_specific": { 00:19:59.265 "mp_policy": "active_passive", 00:19:59.265 "nvme": [ 00:19:59.265 { 00:19:59.265 "ctrlr_data": { 00:19:59.265 "ana_reporting": false, 00:19:59.265 "cntlid": 3, 00:19:59.265 "firmware_revision": "24.05", 00:19:59.265 "model_number": "SPDK bdev Controller", 00:19:59.265 "multi_ctrlr": true, 00:19:59.265 "oacs": { 00:19:59.265 "firmware": 0, 00:19:59.265 "format": 0, 00:19:59.265 "ns_manage": 0, 00:19:59.265 "security": 0 00:19:59.265 }, 00:19:59.265 "serial_number": "00000000000000000000", 00:19:59.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.265 "vendor_id": "0x8086" 00:19:59.265 }, 00:19:59.265 "ns_data": { 00:19:59.265 "can_share": true, 00:19:59.265 "id": 1 00:19:59.265 }, 00:19:59.265 "trid": { 00:19:59.265 "adrfam": "IPv4", 00:19:59.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:59.265 "traddr": "10.0.0.2", 00:19:59.265 "trsvcid": "4421", 00:19:59.265 "trtype": 
"TCP" 00:19:59.265 }, 00:19:59.265 "vs": { 00:19:59.265 "nvme_version": "1.3" 00:19:59.265 } 00:19:59.265 } 00:19:59.265 ] 00:19:59.265 }, 00:19:59.265 "memory_domains": [ 00:19:59.265 { 00:19:59.265 "dma_device_id": "system", 00:19:59.265 "dma_device_type": 1 00:19:59.265 } 00:19:59.265 ], 00:19:59.265 "name": "nvme0n1", 00:19:59.265 "num_blocks": 2097152, 00:19:59.265 "product_name": "NVMe disk", 00:19:59.265 "supported_io_types": { 00:19:59.265 "abort": true, 00:19:59.265 "compare": true, 00:19:59.265 "compare_and_write": true, 00:19:59.265 "flush": true, 00:19:59.265 "nvme_admin": true, 00:19:59.265 "nvme_io": true, 00:19:59.265 "read": true, 00:19:59.265 "reset": true, 00:19:59.265 "unmap": false, 00:19:59.265 "write": true, 00:19:59.265 "write_zeroes": true 00:19:59.265 }, 00:19:59.265 "uuid": "e6a95cac-665c-409e-a31c-6db5182e51ec", 00:19:59.265 "zoned": false 00:19:59.265 } 00:19:59.265 ] 00:19:59.265 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.265 17:23:29 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.265 17:23:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.265 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.525 17:23:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.525 17:23:29 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.8Gx2RcG6Cm 00:19:59.525 17:23:29 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:59.525 17:23:29 -- host/async_init.sh@78 -- # nvmftestfini 00:19:59.525 17:23:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:59.525 17:23:29 -- nvmf/common.sh@117 -- # sync 00:19:59.525 17:23:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.525 17:23:29 -- nvmf/common.sh@120 -- # set +e 00:19:59.525 17:23:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.525 17:23:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.525 rmmod nvme_tcp 00:19:59.525 rmmod nvme_fabrics 00:19:59.525 rmmod nvme_keyring 00:19:59.525 17:23:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.525 17:23:29 -- nvmf/common.sh@124 -- # set -e 00:19:59.525 17:23:29 -- nvmf/common.sh@125 -- # return 0 00:19:59.525 17:23:29 -- nvmf/common.sh@478 -- # '[' -n 86577 ']' 00:19:59.525 17:23:29 -- nvmf/common.sh@479 -- # killprocess 86577 00:19:59.525 17:23:29 -- common/autotest_common.sh@936 -- # '[' -z 86577 ']' 00:19:59.525 17:23:29 -- common/autotest_common.sh@940 -- # kill -0 86577 00:19:59.525 17:23:29 -- common/autotest_common.sh@941 -- # uname 00:19:59.525 17:23:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.525 17:23:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86577 00:19:59.525 killing process with pid 86577 00:19:59.525 17:23:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.525 17:23:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.525 17:23:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86577' 00:19:59.525 17:23:29 -- common/autotest_common.sh@955 -- # kill 86577 00:19:59.525 [2024-04-25 17:23:29.380559] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:59.525 [2024-04-25 17:23:29.380618] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:59.525 17:23:29 -- common/autotest_common.sh@960 -- # wait 86577 00:19:59.785 17:23:29 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:19:59.785 17:23:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:59.785 17:23:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:59.785 17:23:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.785 17:23:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.785 17:23:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.785 17:23:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.785 17:23:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.785 17:23:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:59.785 ************************************ 00:19:59.785 END TEST nvmf_async_init 00:19:59.785 ************************************ 00:19:59.785 00:19:59.785 real 0m1.823s 00:19:59.785 user 0m1.527s 00:19:59.785 sys 0m0.495s 00:19:59.785 17:23:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.785 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.785 17:23:29 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:59.785 17:23:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.785 17:23:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.785 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.785 ************************************ 00:19:59.785 START TEST dma 00:19:59.785 ************************************ 00:19:59.785 17:23:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:59.785 * Looking for test storage... 00:20:00.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:00.044 17:23:29 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.044 17:23:29 -- nvmf/common.sh@7 -- # uname -s 00:20:00.044 17:23:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.044 17:23:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.044 17:23:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.044 17:23:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.044 17:23:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.044 17:23:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.044 17:23:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.044 17:23:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.044 17:23:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.044 17:23:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.044 17:23:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:00.044 17:23:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:00.044 17:23:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.044 17:23:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.044 17:23:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.044 17:23:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.044 17:23:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.044 17:23:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.044 17:23:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.044 17:23:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:20:00.044 17:23:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.044 17:23:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.044 17:23:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.044 17:23:29 -- paths/export.sh@5 -- # export PATH 00:20:00.044 17:23:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.044 17:23:29 -- nvmf/common.sh@47 -- # : 0 00:20:00.044 17:23:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.044 17:23:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.044 17:23:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.044 17:23:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.044 17:23:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.044 17:23:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.044 17:23:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.044 17:23:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.044 17:23:29 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:00.044 17:23:29 -- host/dma.sh@13 -- # exit 0 00:20:00.044 00:20:00.044 real 0m0.109s 00:20:00.044 user 0m0.039s 00:20:00.044 sys 0m0.073s 00:20:00.044 ************************************ 00:20:00.044 END TEST dma 00:20:00.044 ************************************ 00:20:00.044 17:23:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:00.044 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:20:00.044 17:23:29 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:00.044 17:23:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:00.045 17:23:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:00.045 17:23:29 -- common/autotest_common.sh@10 -- # set +x 00:20:00.045 ************************************ 00:20:00.045 START TEST nvmf_identify 00:20:00.045 ************************************ 00:20:00.045 17:23:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:00.045 * Looking for test storage... 00:20:00.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:00.045 17:23:29 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.045 17:23:29 -- nvmf/common.sh@7 -- # uname -s 00:20:00.045 17:23:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.045 17:23:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.045 17:23:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.045 17:23:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.045 17:23:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.045 17:23:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.045 17:23:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.045 17:23:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.045 17:23:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.045 17:23:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.045 17:23:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:00.045 17:23:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:00.045 17:23:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.045 17:23:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.045 17:23:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.045 17:23:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.045 17:23:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.045 17:23:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.045 17:23:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.045 17:23:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.045 17:23:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.045 17:23:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.045 17:23:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.045 17:23:30 -- paths/export.sh@5 -- # export PATH 00:20:00.045 17:23:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.045 17:23:30 -- nvmf/common.sh@47 -- # : 0 00:20:00.045 17:23:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.045 17:23:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.045 17:23:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.045 17:23:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.045 17:23:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.045 17:23:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.045 17:23:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.045 17:23:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.304 17:23:30 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.304 17:23:30 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.304 17:23:30 -- host/identify.sh@14 -- # nvmftestinit 00:20:00.304 17:23:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:00.304 17:23:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.304 17:23:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:00.304 17:23:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:00.304 17:23:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:00.304 17:23:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.304 17:23:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.304 17:23:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.304 17:23:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:00.304 17:23:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:00.304 17:23:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:00.304 17:23:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:00.304 17:23:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:00.304 17:23:30 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:20:00.304 17:23:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.304 17:23:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.304 17:23:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:00.304 17:23:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:00.304 17:23:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.304 17:23:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.304 17:23:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.304 17:23:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.304 17:23:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.304 17:23:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.304 17:23:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.304 17:23:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.304 17:23:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:00.304 17:23:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:00.304 Cannot find device "nvmf_tgt_br" 00:20:00.304 17:23:30 -- nvmf/common.sh@155 -- # true 00:20:00.304 17:23:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.304 Cannot find device "nvmf_tgt_br2" 00:20:00.304 17:23:30 -- nvmf/common.sh@156 -- # true 00:20:00.304 17:23:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:00.304 17:23:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:00.304 Cannot find device "nvmf_tgt_br" 00:20:00.304 17:23:30 -- nvmf/common.sh@158 -- # true 00:20:00.304 17:23:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:00.304 Cannot find device "nvmf_tgt_br2" 00:20:00.304 17:23:30 -- nvmf/common.sh@159 -- # true 00:20:00.304 17:23:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:00.304 17:23:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:00.304 17:23:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.304 17:23:30 -- nvmf/common.sh@162 -- # true 00:20:00.304 17:23:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.304 17:23:30 -- nvmf/common.sh@163 -- # true 00:20:00.304 17:23:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.304 17:23:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.304 17:23:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.304 17:23:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.304 17:23:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.304 17:23:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.304 17:23:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.304 17:23:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.304 17:23:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.304 17:23:30 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:20:00.304 17:23:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:00.304 17:23:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:00.304 17:23:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:00.304 17:23:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.304 17:23:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.304 17:23:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.304 17:23:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:00.563 17:23:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:00.563 17:23:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.563 17:23:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.563 17:23:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.563 17:23:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.563 17:23:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.563 17:23:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:00.563 00:20:00.563 --- 10.0.0.2 ping statistics --- 00:20:00.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.563 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:00.563 17:23:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:00.564 00:20:00.564 --- 10.0.0.3 ping statistics --- 00:20:00.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.564 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:00.564 17:23:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:00.564 00:20:00.564 --- 10.0.0.1 ping statistics --- 00:20:00.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.564 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:00.564 17:23:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.564 17:23:30 -- nvmf/common.sh@422 -- # return 0 00:20:00.564 17:23:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:00.564 17:23:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.564 17:23:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:00.564 17:23:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:00.564 17:23:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.564 17:23:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:00.564 17:23:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:00.564 17:23:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:00.564 17:23:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:00.564 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.564 17:23:30 -- host/identify.sh@19 -- # nvmfpid=86839 00:20:00.564 17:23:30 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.564 17:23:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.564 17:23:30 -- host/identify.sh@23 -- # waitforlisten 86839 00:20:00.564 17:23:30 -- common/autotest_common.sh@817 -- # '[' -z 86839 ']' 00:20:00.564 17:23:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.564 17:23:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.564 17:23:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.564 17:23:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.564 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.564 [2024-04-25 17:23:30.447527] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:00.564 [2024-04-25 17:23:30.447803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.823 [2024-04-25 17:23:30.589336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.823 [2024-04-25 17:23:30.662754] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.823 [2024-04-25 17:23:30.663063] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.823 [2024-04-25 17:23:30.663090] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.823 [2024-04-25 17:23:30.663101] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.823 [2024-04-25 17:23:30.663110] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
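Before the RPC configuration that follows, the virtual topology nvmf_veth_init assembled above can be seen in one place. This is a condensed sketch of the ip/iptables commands visible in the trace (interface, namespace, and address names exactly as they appear in the log); the script's cleanup, error handling, and waitforlisten loop are omitted.

# One veth pair per port: initiator side stays on the host, target side moves
# into the nvmf_tgt_ns_spdk namespace; the *_br peers are joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing as pinged below: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target ports.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic to the initiator interface and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks and the kernel initiator module, as traced above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

# The target is then launched inside the namespace (host/identify.sh@18 above) and
# the test waits for its RPC socket at /var/tmp/spdk.sock before issuing rpc_cmd calls.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &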
00:20:00.823 [2024-04-25 17:23:30.663295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.823 [2024-04-25 17:23:30.663993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.823 [2024-04-25 17:23:30.664136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.823 [2024-04-25 17:23:30.664145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.761 17:23:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.761 17:23:31 -- common/autotest_common.sh@850 -- # return 0 00:20:01.761 17:23:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:01.761 17:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 [2024-04-25 17:23:31.445065] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.761 17:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.761 17:23:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:01.761 17:23:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 17:23:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:01.761 17:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 Malloc0 00:20:01.761 17:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.761 17:23:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.761 17:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 17:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.761 17:23:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:01.761 17:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 17:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.761 17:23:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.761 17:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 [2024-04-25 17:23:31.538341] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.761 17:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.761 17:23:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:01.761 17:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 17:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.761 17:23:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:01.761 17:23:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.761 17:23:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.761 [2024-04-25 17:23:31.554166] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:01.761 [ 
00:20:01.761 { 00:20:01.761 "allow_any_host": true, 00:20:01.761 "hosts": [], 00:20:01.761 "listen_addresses": [ 00:20:01.761 { 00:20:01.761 "adrfam": "IPv4", 00:20:01.761 "traddr": "10.0.0.2", 00:20:01.761 "transport": "TCP", 00:20:01.761 "trsvcid": "4420", 00:20:01.761 "trtype": "TCP" 00:20:01.761 } 00:20:01.761 ], 00:20:01.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:01.761 "subtype": "Discovery" 00:20:01.761 }, 00:20:01.761 { 00:20:01.761 "allow_any_host": true, 00:20:01.761 "hosts": [], 00:20:01.761 "listen_addresses": [ 00:20:01.761 { 00:20:01.761 "adrfam": "IPv4", 00:20:01.761 "traddr": "10.0.0.2", 00:20:01.761 "transport": "TCP", 00:20:01.761 "trsvcid": "4420", 00:20:01.761 "trtype": "TCP" 00:20:01.761 } 00:20:01.761 ], 00:20:01.761 "max_cntlid": 65519, 00:20:01.761 "max_namespaces": 32, 00:20:01.761 "min_cntlid": 1, 00:20:01.761 "model_number": "SPDK bdev Controller", 00:20:01.761 "namespaces": [ 00:20:01.761 { 00:20:01.761 "bdev_name": "Malloc0", 00:20:01.761 "eui64": "ABCDEF0123456789", 00:20:01.761 "name": "Malloc0", 00:20:01.761 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:01.761 "nsid": 1, 00:20:01.761 "uuid": "dfa0e075-0c40-48a4-9d7d-5acc52e099f7" 00:20:01.761 } 00:20:01.761 ], 00:20:01.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.761 "serial_number": "SPDK00000000000001", 00:20:01.761 "subtype": "NVMe" 00:20:01.761 } 00:20:01.761 ] 00:20:01.761 17:23:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.761 17:23:31 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:01.761 [2024-04-25 17:23:31.594863] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:20:01.761 [2024-04-25 17:23:31.595055] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86892 ] 00:20:01.761 [2024-04-25 17:23:31.736381] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:01.761 [2024-04-25 17:23:31.736446] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:01.761 [2024-04-25 17:23:31.736453] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:01.761 [2024-04-25 17:23:31.736467] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:01.761 [2024-04-25 17:23:31.736475] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:01.761 [2024-04-25 17:23:31.736608] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:01.761 [2024-04-25 17:23:31.736670] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x65d360 0 00:20:02.023 [2024-04-25 17:23:31.746777] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:02.023 [2024-04-25 17:23:31.746803] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:02.023 [2024-04-25 17:23:31.746825] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:02.023 [2024-04-25 17:23:31.746829] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:02.023 [2024-04-25 17:23:31.746871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.023 [2024-04-25 17:23:31.746878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.023 [2024-04-25 17:23:31.746882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.023 [2024-04-25 17:23:31.746895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:02.023 [2024-04-25 17:23:31.746934] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.023 [2024-04-25 17:23:31.753825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.023 [2024-04-25 17:23:31.753845] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.023 [2024-04-25 17:23:31.753866] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.023 [2024-04-25 17:23:31.753871] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.023 [2024-04-25 17:23:31.753882] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:02.023 [2024-04-25 17:23:31.753890] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:02.024 [2024-04-25 17:23:31.753896] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:02.024 [2024-04-25 17:23:31.753913] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.753918] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.753922] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.753931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.753960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.754031] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.754053] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.754057] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754061] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.754067] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:02.024 [2024-04-25 17:23:31.754075] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:02.024 [2024-04-25 17:23:31.754097] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754101] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.754128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.754147] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.754200] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.754207] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.754210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754214] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.754220] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:02.024 [2024-04-25 17:23:31.754228] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:02.024 [2024-04-25 17:23:31.754235] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754239] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754242] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.754256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.754274] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.754325] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.754332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:02.024 [2024-04-25 17:23:31.754335] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754339] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.754345] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:02.024 [2024-04-25 17:23:31.754355] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.754369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.754387] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.754437] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.754444] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.754447] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754451] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.754456] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:02.024 [2024-04-25 17:23:31.754461] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:02.024 [2024-04-25 17:23:31.754469] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:02.024 [2024-04-25 17:23:31.754574] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:02.024 [2024-04-25 17:23:31.754580] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:02.024 [2024-04-25 17:23:31.754589] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754593] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754597] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.754604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.754622] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.754679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.754685] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.754689] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754692] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.754698] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:02.024 [2024-04-25 17:23:31.754707] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754712] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754730] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.754754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.754773] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.754860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.754876] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.754881] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754885] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.754891] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:02.024 [2024-04-25 17:23:31.754897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:02.024 [2024-04-25 17:23:31.754906] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:02.024 [2024-04-25 17:23:31.754922] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:02.024 [2024-04-25 17:23:31.754935] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.754941] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.754949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.754972] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.755061] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.024 [2024-04-25 17:23:31.755069] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.024 [2024-04-25 17:23:31.755073] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755077] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x65d360): datao=0, datal=4096, cccid=0 00:20:02.024 [2024-04-25 17:23:31.755082] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a5a20) on tqpair(0x65d360): expected_datao=0, payload_size=4096 00:20:02.024 [2024-04-25 17:23:31.755087] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755095] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755100] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.755129] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.755147] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755151] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.755160] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:02.024 [2024-04-25 17:23:31.755166] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:02.024 [2024-04-25 17:23:31.755171] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:02.024 [2024-04-25 17:23:31.755176] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:02.024 [2024-04-25 17:23:31.755196] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:02.024 [2024-04-25 17:23:31.755201] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:02.024 [2024-04-25 17:23:31.755210] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:02.024 [2024-04-25 17:23:31.755217] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755222] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755225] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.024 [2024-04-25 17:23:31.755252] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.755314] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.755320] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.755324] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755327] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5a20) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.755335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755340] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755343] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.024 [2024-04-25 17:23:31.755356] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.024 [2024-04-25 17:23:31.755374] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755378] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755382] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.024 [2024-04-25 17:23:31.755393] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.024 [2024-04-25 17:23:31.755411] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:02.024 [2024-04-25 17:23:31.755423] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:02.024 [2024-04-25 17:23:31.755430] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755434] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.755461] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5a20, cid 0, qid 0 00:20:02.024 [2024-04-25 17:23:31.755467] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5b80, cid 1, qid 0 00:20:02.024 [2024-04-25 17:23:31.755472] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5ce0, cid 2, qid 0 00:20:02.024 [2024-04-25 17:23:31.755476] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.024 [2024-04-25 17:23:31.755481] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5fa0, cid 4, qid 0 00:20:02.024 [2024-04-25 17:23:31.755571] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.755577] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.755581] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755584] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5fa0) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.755590] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:02.024 [2024-04-25 17:23:31.755595] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:02.024 [2024-04-25 17:23:31.755606] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755611] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.755636] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5fa0, cid 4, qid 0 00:20:02.024 [2024-04-25 17:23:31.755696] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.024 [2024-04-25 17:23:31.755703] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.024 [2024-04-25 17:23:31.755706] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755710] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x65d360): datao=0, datal=4096, cccid=4 00:20:02.024 [2024-04-25 17:23:31.755730] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a5fa0) on tqpair(0x65d360): expected_datao=0, payload_size=4096 00:20:02.024 [2024-04-25 17:23:31.755735] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755742] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755746] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755754] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.024 [2024-04-25 17:23:31.755788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.024 [2024-04-25 17:23:31.755794] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755798] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5fa0) on tqpair=0x65d360 00:20:02.024 [2024-04-25 17:23:31.755812] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:02.024 [2024-04-25 17:23:31.755833] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755839] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.024 [2024-04-25 17:23:31.755855] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755859] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.755863] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x65d360) 00:20:02.024 [2024-04-25 17:23:31.755869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.024 [2024-04-25 17:23:31.755898] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5fa0, cid 4, qid 0 00:20:02.024 [2024-04-25 17:23:31.755907] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a6100, cid 5, qid 0 00:20:02.024 [2024-04-25 17:23:31.756004] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.024 [2024-04-25 17:23:31.756011] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.024 [2024-04-25 17:23:31.756015] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.024 [2024-04-25 17:23:31.756019] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x65d360): datao=0, datal=1024, cccid=4 00:20:02.024 [2024-04-25 17:23:31.756024] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a5fa0) on tqpair(0x65d360): expected_datao=0, payload_size=1024 00:20:02.025 [2024-04-25 17:23:31.756029] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.756036] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.756040] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.756046] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.025 [2024-04-25 17:23:31.756052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.025 [2024-04-25 17:23:31.756055] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.756059] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a6100) on tqpair=0x65d360 00:20:02.025 [2024-04-25 17:23:31.800774] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.025 [2024-04-25 17:23:31.800805] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.025 [2024-04-25 17:23:31.800811] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.800815] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5fa0) on tqpair=0x65d360 00:20:02.025 [2024-04-25 17:23:31.800838] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.800844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x65d360) 00:20:02.025 [2024-04-25 17:23:31.800854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.025 [2024-04-25 17:23:31.800890] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5fa0, cid 4, qid 0 00:20:02.025 [2024-04-25 17:23:31.800991] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.025 [2024-04-25 17:23:31.800999] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.025 [2024-04-25 17:23:31.801003] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801007] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x65d360): datao=0, datal=3072, cccid=4 00:20:02.025 [2024-04-25 17:23:31.801012] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a5fa0) on tqpair(0x65d360): expected_datao=0, payload_size=3072 00:20:02.025 [2024-04-25 17:23:31.801016] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801024] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801028] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 
17:23:31.801037] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.025 [2024-04-25 17:23:31.801043] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.025 [2024-04-25 17:23:31.801047] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801051] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5fa0) on tqpair=0x65d360 00:20:02.025 [2024-04-25 17:23:31.801077] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x65d360) 00:20:02.025 [2024-04-25 17:23:31.801104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.025 [2024-04-25 17:23:31.801130] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5fa0, cid 4, qid 0 00:20:02.025 [2024-04-25 17:23:31.801196] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.025 [2024-04-25 17:23:31.801203] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.025 [2024-04-25 17:23:31.801206] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801210] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x65d360): datao=0, datal=8, cccid=4 00:20:02.025 [2024-04-25 17:23:31.801215] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6a5fa0) on tqpair(0x65d360): expected_datao=0, payload_size=8 00:20:02.025 [2024-04-25 17:23:31.801219] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801225] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.801229] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.841772] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.025 [2024-04-25 17:23:31.841791] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.025 [2024-04-25 17:23:31.841796] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.841800] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5fa0) on tqpair=0x65d360 00:20:02.025 ===================================================== 00:20:02.025 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:02.025 ===================================================== 00:20:02.025 Controller Capabilities/Features 00:20:02.025 ================================ 00:20:02.025 Vendor ID: 0000 00:20:02.025 Subsystem Vendor ID: 0000 00:20:02.025 Serial Number: .................... 00:20:02.025 Model Number: ........................................ 
00:20:02.025 Firmware Version: 24.05 00:20:02.025 Recommended Arb Burst: 0 00:20:02.025 IEEE OUI Identifier: 00 00 00 00:20:02.025 Multi-path I/O 00:20:02.025 May have multiple subsystem ports: No 00:20:02.025 May have multiple controllers: No 00:20:02.025 Associated with SR-IOV VF: No 00:20:02.025 Max Data Transfer Size: 131072 00:20:02.025 Max Number of Namespaces: 0 00:20:02.025 Max Number of I/O Queues: 1024 00:20:02.025 NVMe Specification Version (VS): 1.3 00:20:02.025 NVMe Specification Version (Identify): 1.3 00:20:02.025 Maximum Queue Entries: 128 00:20:02.025 Contiguous Queues Required: Yes 00:20:02.025 Arbitration Mechanisms Supported 00:20:02.025 Weighted Round Robin: Not Supported 00:20:02.025 Vendor Specific: Not Supported 00:20:02.025 Reset Timeout: 15000 ms 00:20:02.025 Doorbell Stride: 4 bytes 00:20:02.025 NVM Subsystem Reset: Not Supported 00:20:02.025 Command Sets Supported 00:20:02.025 NVM Command Set: Supported 00:20:02.025 Boot Partition: Not Supported 00:20:02.025 Memory Page Size Minimum: 4096 bytes 00:20:02.025 Memory Page Size Maximum: 4096 bytes 00:20:02.025 Persistent Memory Region: Not Supported 00:20:02.025 Optional Asynchronous Events Supported 00:20:02.025 Namespace Attribute Notices: Not Supported 00:20:02.025 Firmware Activation Notices: Not Supported 00:20:02.025 ANA Change Notices: Not Supported 00:20:02.025 PLE Aggregate Log Change Notices: Not Supported 00:20:02.025 LBA Status Info Alert Notices: Not Supported 00:20:02.025 EGE Aggregate Log Change Notices: Not Supported 00:20:02.025 Normal NVM Subsystem Shutdown event: Not Supported 00:20:02.025 Zone Descriptor Change Notices: Not Supported 00:20:02.025 Discovery Log Change Notices: Supported 00:20:02.025 Controller Attributes 00:20:02.025 128-bit Host Identifier: Not Supported 00:20:02.025 Non-Operational Permissive Mode: Not Supported 00:20:02.025 NVM Sets: Not Supported 00:20:02.025 Read Recovery Levels: Not Supported 00:20:02.025 Endurance Groups: Not Supported 00:20:02.025 Predictable Latency Mode: Not Supported 00:20:02.025 Traffic Based Keep ALive: Not Supported 00:20:02.025 Namespace Granularity: Not Supported 00:20:02.025 SQ Associations: Not Supported 00:20:02.025 UUID List: Not Supported 00:20:02.025 Multi-Domain Subsystem: Not Supported 00:20:02.025 Fixed Capacity Management: Not Supported 00:20:02.025 Variable Capacity Management: Not Supported 00:20:02.025 Delete Endurance Group: Not Supported 00:20:02.025 Delete NVM Set: Not Supported 00:20:02.025 Extended LBA Formats Supported: Not Supported 00:20:02.025 Flexible Data Placement Supported: Not Supported 00:20:02.025 00:20:02.025 Controller Memory Buffer Support 00:20:02.025 ================================ 00:20:02.025 Supported: No 00:20:02.025 00:20:02.025 Persistent Memory Region Support 00:20:02.025 ================================ 00:20:02.025 Supported: No 00:20:02.025 00:20:02.025 Admin Command Set Attributes 00:20:02.025 ============================ 00:20:02.025 Security Send/Receive: Not Supported 00:20:02.025 Format NVM: Not Supported 00:20:02.025 Firmware Activate/Download: Not Supported 00:20:02.025 Namespace Management: Not Supported 00:20:02.025 Device Self-Test: Not Supported 00:20:02.025 Directives: Not Supported 00:20:02.025 NVMe-MI: Not Supported 00:20:02.025 Virtualization Management: Not Supported 00:20:02.025 Doorbell Buffer Config: Not Supported 00:20:02.025 Get LBA Status Capability: Not Supported 00:20:02.025 Command & Feature Lockdown Capability: Not Supported 00:20:02.025 Abort Command Limit: 1 00:20:02.025 Async 
Event Request Limit: 4 00:20:02.025 Number of Firmware Slots: N/A 00:20:02.025 Firmware Slot 1 Read-Only: N/A 00:20:02.025 Firmware Activation Without Reset: N/A 00:20:02.025 Multiple Update Detection Support: N/A 00:20:02.025 Firmware Update Granularity: No Information Provided 00:20:02.025 Per-Namespace SMART Log: No 00:20:02.025 Asymmetric Namespace Access Log Page: Not Supported 00:20:02.025 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:02.025 Command Effects Log Page: Not Supported 00:20:02.025 Get Log Page Extended Data: Supported 00:20:02.025 Telemetry Log Pages: Not Supported 00:20:02.025 Persistent Event Log Pages: Not Supported 00:20:02.025 Supported Log Pages Log Page: May Support 00:20:02.025 Commands Supported & Effects Log Page: Not Supported 00:20:02.025 Feature Identifiers & Effects Log Page:May Support 00:20:02.025 NVMe-MI Commands & Effects Log Page: May Support 00:20:02.025 Data Area 4 for Telemetry Log: Not Supported 00:20:02.025 Error Log Page Entries Supported: 128 00:20:02.025 Keep Alive: Not Supported 00:20:02.025 00:20:02.025 NVM Command Set Attributes 00:20:02.025 ========================== 00:20:02.025 Submission Queue Entry Size 00:20:02.025 Max: 1 00:20:02.025 Min: 1 00:20:02.025 Completion Queue Entry Size 00:20:02.025 Max: 1 00:20:02.025 Min: 1 00:20:02.025 Number of Namespaces: 0 00:20:02.025 Compare Command: Not Supported 00:20:02.025 Write Uncorrectable Command: Not Supported 00:20:02.025 Dataset Management Command: Not Supported 00:20:02.025 Write Zeroes Command: Not Supported 00:20:02.025 Set Features Save Field: Not Supported 00:20:02.025 Reservations: Not Supported 00:20:02.025 Timestamp: Not Supported 00:20:02.025 Copy: Not Supported 00:20:02.025 Volatile Write Cache: Not Present 00:20:02.025 Atomic Write Unit (Normal): 1 00:20:02.025 Atomic Write Unit (PFail): 1 00:20:02.025 Atomic Compare & Write Unit: 1 00:20:02.025 Fused Compare & Write: Supported 00:20:02.025 Scatter-Gather List 00:20:02.025 SGL Command Set: Supported 00:20:02.025 SGL Keyed: Supported 00:20:02.025 SGL Bit Bucket Descriptor: Not Supported 00:20:02.025 SGL Metadata Pointer: Not Supported 00:20:02.025 Oversized SGL: Not Supported 00:20:02.025 SGL Metadata Address: Not Supported 00:20:02.025 SGL Offset: Supported 00:20:02.025 Transport SGL Data Block: Not Supported 00:20:02.025 Replay Protected Memory Block: Not Supported 00:20:02.025 00:20:02.025 Firmware Slot Information 00:20:02.025 ========================= 00:20:02.025 Active slot: 0 00:20:02.025 00:20:02.025 00:20:02.025 Error Log 00:20:02.025 ========= 00:20:02.025 00:20:02.025 Active Namespaces 00:20:02.025 ================= 00:20:02.025 Discovery Log Page 00:20:02.025 ================== 00:20:02.025 Generation Counter: 2 00:20:02.025 Number of Records: 2 00:20:02.025 Record Format: 0 00:20:02.025 00:20:02.025 Discovery Log Entry 0 00:20:02.025 ---------------------- 00:20:02.025 Transport Type: 3 (TCP) 00:20:02.025 Address Family: 1 (IPv4) 00:20:02.025 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:02.025 Entry Flags: 00:20:02.025 Duplicate Returned Information: 1 00:20:02.025 Explicit Persistent Connection Support for Discovery: 1 00:20:02.025 Transport Requirements: 00:20:02.025 Secure Channel: Not Required 00:20:02.025 Port ID: 0 (0x0000) 00:20:02.025 Controller ID: 65535 (0xffff) 00:20:02.025 Admin Max SQ Size: 128 00:20:02.025 Transport Service Identifier: 4420 00:20:02.025 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:02.025 Transport Address: 10.0.0.2 00:20:02.025 
Discovery Log Entry 1 00:20:02.025 ---------------------- 00:20:02.025 Transport Type: 3 (TCP) 00:20:02.025 Address Family: 1 (IPv4) 00:20:02.025 Subsystem Type: 2 (NVM Subsystem) 00:20:02.025 Entry Flags: 00:20:02.025 Duplicate Returned Information: 0 00:20:02.025 Explicit Persistent Connection Support for Discovery: 0 00:20:02.025 Transport Requirements: 00:20:02.025 Secure Channel: Not Required 00:20:02.025 Port ID: 0 (0x0000) 00:20:02.025 Controller ID: 65535 (0xffff) 00:20:02.025 Admin Max SQ Size: 128 00:20:02.025 Transport Service Identifier: 4420 00:20:02.025 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:02.025 Transport Address: 10.0.0.2 [2024-04-25 17:23:31.841895] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:02.025 [2024-04-25 17:23:31.841910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.025 [2024-04-25 17:23:31.841933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.025 [2024-04-25 17:23:31.841940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.025 [2024-04-25 17:23:31.841946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.025 [2024-04-25 17:23:31.841955] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.841960] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.841963] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.025 [2024-04-25 17:23:31.841972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.025 [2024-04-25 17:23:31.841997] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.025 [2024-04-25 17:23:31.842048] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.025 [2024-04-25 17:23:31.842055] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.025 [2024-04-25 17:23:31.842058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.842062] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.025 [2024-04-25 17:23:31.842070] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.842074] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.842078] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.025 [2024-04-25 17:23:31.842085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.025 [2024-04-25 17:23:31.842107] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.025 [2024-04-25 17:23:31.842177] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.025 [2024-04-25 17:23:31.842184] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.025 [2024-04-25 17:23:31.842187] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.842191] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.025 [2024-04-25 17:23:31.842196] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:02.025 [2024-04-25 17:23:31.842201] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:02.025 [2024-04-25 17:23:31.842210] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.025 [2024-04-25 17:23:31.842214] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842218] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.842225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.842243] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.842295] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.842302] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.842305] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842309] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.842320] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842324] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842328] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.842334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.842352] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.842400] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.842407] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.842410] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842414] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.842424] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842428] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842432] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.842439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.842456] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.842505] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 
17:23:31.842511] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.842515] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842518] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.842528] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842533] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842536] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.842543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.842561] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.842611] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.842618] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.842621] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842625] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.842635] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842639] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842642] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.842649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.842667] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.842716] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.842760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.842765] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842769] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.842781] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842786] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842789] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.842797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.842818] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.842869] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.842876] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.842880] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 
[2024-04-25 17:23:31.842884] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.842894] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842899] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842903] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.842910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.842928] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.842981] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.842989] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.842993] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.842997] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843007] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843012] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843016] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843041] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843093] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843108] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843112] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843138] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843146] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843171] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843223] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843230] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843233] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843237] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843247] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843251] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843255] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843279] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843333] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843339] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843342] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843346] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843356] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843360] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843364] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843388] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843440] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843447] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843450] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843454] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843465] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843469] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843472] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843497] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843551] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843558] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843561] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843565] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843575] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843580] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843583] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843607] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843656] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843667] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843681] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843713] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843793] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843801] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843805] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843819] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843824] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843828] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843855] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.843906] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.843913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.843916] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843920] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.843930] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843935] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.843939] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.843946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.843964] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.844015] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.844022] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.844026] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.844030] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.844040] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.844045] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.844049] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.026 [2024-04-25 17:23:31.844056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.026 [2024-04-25 17:23:31.844089] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.026 [2024-04-25 17:23:31.844139] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.026 [2024-04-25 17:23:31.844145] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.026 [2024-04-25 17:23:31.844150] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.026 [2024-04-25 17:23:31.844153] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.026 [2024-04-25 17:23:31.844163] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844168] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844171] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.027 [2024-04-25 17:23:31.844178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.027 [2024-04-25 17:23:31.844196] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.027 [2024-04-25 17:23:31.844247] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.027 [2024-04-25 17:23:31.844253] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.027 [2024-04-25 17:23:31.844266] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844287] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.027 [2024-04-25 17:23:31.844297] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844302] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844305] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.027 [2024-04-25 17:23:31.844313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.027 [2024-04-25 17:23:31.844332] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 
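The repeated FABRIC PROPERTY GET entries in this stretch are the host polling the discovery controller's CSTS register over the admin queue while the shutdown requested earlier (RTD3E = 0 us, shutdown timeout = 10000 ms) finishes; the "shutdown complete in 6 milliseconds" message a few lines further down ends that loop. A minimal sketch of how the same register is visible through SPDK's public host API, assuming an already-connected ctrlr handle; the helper below is illustrative only and is not part of this test run:

#include "spdk/nvme.h"
#include <stdio.h>

/* Illustrative helper (not from the test): read the fabrics-mapped CSTS
 * register that the PROPERTY GET commands above are fetching. */
static void print_csts(struct spdk_nvme_ctrlr *ctrlr)
{
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        /* SHST == 2 means shutdown processing complete per the NVMe base spec. */
        printf("CSTS.RDY=%u CSTS.SHST=%u\n", csts.bits.rdy, csts.bits.shst);
}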
00:20:02.027 [2024-04-25 17:23:31.844384] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.027 [2024-04-25 17:23:31.844390] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.027 [2024-04-25 17:23:31.844394] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844397] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.027 [2024-04-25 17:23:31.844407] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844416] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.027 [2024-04-25 17:23:31.844423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.027 [2024-04-25 17:23:31.844441] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.027 [2024-04-25 17:23:31.844491] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.027 [2024-04-25 17:23:31.844498] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.027 [2024-04-25 17:23:31.844501] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844505] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.027 [2024-04-25 17:23:31.844515] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844520] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844524] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.027 [2024-04-25 17:23:31.844531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.027 [2024-04-25 17:23:31.844549] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.027 [2024-04-25 17:23:31.844602] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.027 [2024-04-25 17:23:31.844609] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.027 [2024-04-25 17:23:31.844612] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844616] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.027 [2024-04-25 17:23:31.844641] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844646] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.844649] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.027 [2024-04-25 17:23:31.844656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.027 [2024-04-25 17:23:31.844674] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.027 [2024-04-25 17:23:31.844723] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.027 [2024-04-25 17:23:31.844745] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:02.027 [2024-04-25 17:23:31.848782] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.848805] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.027 [2024-04-25 17:23:31.848821] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.848826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.848830] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x65d360) 00:20:02.027 [2024-04-25 17:23:31.848838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.027 [2024-04-25 17:23:31.848863] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6a5e40, cid 3, qid 0 00:20:02.027 [2024-04-25 17:23:31.848923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.027 [2024-04-25 17:23:31.848930] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.027 [2024-04-25 17:23:31.848934] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.027 [2024-04-25 17:23:31.848937] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6a5e40) on tqpair=0x65d360 00:20:02.027 [2024-04-25 17:23:31.848946] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:02.027 00:20:02.027 17:23:31 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:02.027 [2024-04-25 17:23:31.883188] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
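The spdk_nvme_identify invocation above targets the second discovery log entry (nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420). Roughly, the tool parses that transport ID string, connects over TCP, and reads the controller's identify data; the DEBUG lines that follow trace exactly that admin-queue bring-up. A hedged, self-contained sketch of the same flow against SPDK's public host API, with error handling trimmed and only the trid string taken from this log:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";        /* name is illustrative */
        if (spdk_env_init(&env_opts) < 0) {
                return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify above. */
        spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Connect performs the admin-queue bring-up traced in the DEBUG lines below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("connected to %s, MDTS=%u\n", cdata->subnqn, cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
}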
00:20:02.027 [2024-04-25 17:23:31.883340] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86900 ] 00:20:02.288 [2024-04-25 17:23:32.017353] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:02.288 [2024-04-25 17:23:32.017421] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:02.289 [2024-04-25 17:23:32.017427] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:02.289 [2024-04-25 17:23:32.017437] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:02.289 [2024-04-25 17:23:32.017444] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:02.289 [2024-04-25 17:23:32.017547] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:02.289 [2024-04-25 17:23:32.017605] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf21360 0 00:20:02.289 [2024-04-25 17:23:32.034718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:02.289 [2024-04-25 17:23:32.034738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:02.289 [2024-04-25 17:23:32.034759] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:02.289 [2024-04-25 17:23:32.034763] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:02.289 [2024-04-25 17:23:32.034797] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.034804] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.034808] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.034818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:02.289 [2024-04-25 17:23:32.034847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.042718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.289 [2024-04-25 17:23:32.042737] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.289 [2024-04-25 17:23:32.042741] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.042762] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.289 [2024-04-25 17:23:32.042771] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:02.289 [2024-04-25 17:23:32.042777] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:02.289 [2024-04-25 17:23:32.042783] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:02.289 [2024-04-25 17:23:32.042798] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.042803] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.042806] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.042815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.289 [2024-04-25 17:23:32.042843] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.042900] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.289 [2024-04-25 17:23:32.042907] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.289 [2024-04-25 17:23:32.042910] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.042914] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.289 [2024-04-25 17:23:32.042919] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:02.289 [2024-04-25 17:23:32.042926] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:02.289 [2024-04-25 17:23:32.042933] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.042936] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.042940] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.042947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.289 [2024-04-25 17:23:32.042982] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.043032] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.289 [2024-04-25 17:23:32.043038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.289 [2024-04-25 17:23:32.043042] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043045] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.289 [2024-04-25 17:23:32.043051] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:02.289 [2024-04-25 17:23:32.043059] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:02.289 [2024-04-25 17:23:32.043066] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043070] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043073] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.043080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.289 [2024-04-25 17:23:32.043099] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.043153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.289 [2024-04-25 17:23:32.043159] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.289 [2024-04-25 17:23:32.043162] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043166] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.289 [2024-04-25 17:23:32.043171] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:02.289 [2024-04-25 17:23:32.043181] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043186] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043190] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.043197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.289 [2024-04-25 17:23:32.043215] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.043265] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.289 [2024-04-25 17:23:32.043271] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.289 [2024-04-25 17:23:32.043275] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043279] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.289 [2024-04-25 17:23:32.043283] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:02.289 [2024-04-25 17:23:32.043289] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:02.289 [2024-04-25 17:23:32.043297] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:02.289 [2024-04-25 17:23:32.043403] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:02.289 [2024-04-25 17:23:32.043416] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:02.289 [2024-04-25 17:23:32.043425] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043429] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043433] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.043440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.289 [2024-04-25 17:23:32.043461] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.043511] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.289 [2024-04-25 17:23:32.043522] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.289 [2024-04-25 17:23:32.043527] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043531] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.289 [2024-04-25 17:23:32.043536] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:02.289 [2024-04-25 17:23:32.043546] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043550] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043554] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.043561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.289 [2024-04-25 17:23:32.043580] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.043633] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.289 [2024-04-25 17:23:32.043643] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.289 [2024-04-25 17:23:32.043648] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043651] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.289 [2024-04-25 17:23:32.043656] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:02.289 [2024-04-25 17:23:32.043661] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:02.289 [2024-04-25 17:23:32.043669] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:02.289 [2024-04-25 17:23:32.043682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:02.289 [2024-04-25 17:23:32.043694] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.289 [2024-04-25 17:23:32.043732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.289 [2024-04-25 17:23:32.043756] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.289 [2024-04-25 17:23:32.043851] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.289 [2024-04-25 17:23:32.043857] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.289 [2024-04-25 17:23:32.043861] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.289 [2024-04-25 17:23:32.043873] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=4096, cccid=0 00:20:02.290 [2024-04-25 17:23:32.043877] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69a20) on tqpair(0xf21360): expected_datao=0, payload_size=4096 00:20:02.290 [2024-04-25 17:23:32.043882] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.043889] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.043893] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 
17:23:32.043901] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.290 [2024-04-25 17:23:32.043907] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.290 [2024-04-25 17:23:32.043911] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.043915] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.290 [2024-04-25 17:23:32.043923] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:02.290 [2024-04-25 17:23:32.043928] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:02.290 [2024-04-25 17:23:32.043932] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:02.290 [2024-04-25 17:23:32.043936] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:02.290 [2024-04-25 17:23:32.043941] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:02.290 [2024-04-25 17:23:32.043946] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.043955] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.043963] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.043967] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.043971] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.043978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.290 [2024-04-25 17:23:32.043999] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.290 [2024-04-25 17:23:32.044054] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.290 [2024-04-25 17:23:32.044061] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.290 [2024-04-25 17:23:32.044064] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044068] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a20) on tqpair=0xf21360 00:20:02.290 [2024-04-25 17:23:32.044075] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044095] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044098] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.044105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.290 [2024-04-25 17:23:32.044111] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044118] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf21360) 
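The SET FEATURES ASYNC EVENT CONFIGURATION command and the ASYNC EVENT REQUEST submissions just above and below (cid 0 through 3) arm the controller's asynchronous event reporting during initialization. A small illustrative sketch, not taken from this test, of how an application would receive those AER completions through SPDK's callback registration:

#include "spdk/nvme.h"
#include <stdio.h>

/* Called by the SPDK host library whenever one of the outstanding
 * ASYNC EVENT REQUEST commands completes. */
static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        /* cdw0 carries the async event type/info per the NVMe spec. */
        printf("AER completion: cdw0=0x%08x\n", cpl->cdw0);
}

static void register_aer(struct spdk_nvme_ctrlr *ctrlr)
{
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}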
00:20:02.290 [2024-04-25 17:23:32.044124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.290 [2024-04-25 17:23:32.044130] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044133] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044137] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.044142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.290 [2024-04-25 17:23:32.044148] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044155] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.044161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.290 [2024-04-25 17:23:32.044165] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044177] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044185] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044189] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.044195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.290 [2024-04-25 17:23:32.044216] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a20, cid 0, qid 0 00:20:02.290 [2024-04-25 17:23:32.044223] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69b80, cid 1, qid 0 00:20:02.290 [2024-04-25 17:23:32.044227] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69ce0, cid 2, qid 0 00:20:02.290 [2024-04-25 17:23:32.044232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e40, cid 3, qid 0 00:20:02.290 [2024-04-25 17:23:32.044237] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fa0, cid 4, qid 0 00:20:02.290 [2024-04-25 17:23:32.044373] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.290 [2024-04-25 17:23:32.044381] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.290 [2024-04-25 17:23:32.044385] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044389] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fa0) on tqpair=0xf21360 00:20:02.290 [2024-04-25 17:23:32.044394] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:02.290 [2024-04-25 17:23:32.044400] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:02.290 [2024-04-25 
17:23:32.044412] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044419] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044426] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044431] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044435] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.044442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:02.290 [2024-04-25 17:23:32.044463] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fa0, cid 4, qid 0 00:20:02.290 [2024-04-25 17:23:32.044520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.290 [2024-04-25 17:23:32.044526] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.290 [2024-04-25 17:23:32.044530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044534] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fa0) on tqpair=0xf21360 00:20:02.290 [2024-04-25 17:23:32.044583] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044609] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044617] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.044629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.290 [2024-04-25 17:23:32.044664] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fa0, cid 4, qid 0 00:20:02.290 [2024-04-25 17:23:32.044728] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.290 [2024-04-25 17:23:32.044735] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.290 [2024-04-25 17:23:32.044738] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044742] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=4096, cccid=4 00:20:02.290 [2024-04-25 17:23:32.044746] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fa0) on tqpair(0xf21360): expected_datao=0, payload_size=4096 00:20:02.290 [2024-04-25 17:23:32.044750] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044757] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044761] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044781] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.290 [2024-04-25 17:23:32.044787] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.290 [2024-04-25 17:23:32.044791] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044794] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fa0) on tqpair=0xf21360 00:20:02.290 [2024-04-25 17:23:32.044805] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:02.290 [2024-04-25 17:23:32.044816] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044826] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:02.290 [2024-04-25 17:23:32.044834] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044838] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf21360) 00:20:02.290 [2024-04-25 17:23:32.044845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.290 [2024-04-25 17:23:32.044866] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fa0, cid 4, qid 0 00:20:02.290 [2024-04-25 17:23:32.044941] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.290 [2024-04-25 17:23:32.044947] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.290 [2024-04-25 17:23:32.044951] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044954] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=4096, cccid=4 00:20:02.290 [2024-04-25 17:23:32.044958] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fa0) on tqpair(0xf21360): expected_datao=0, payload_size=4096 00:20:02.290 [2024-04-25 17:23:32.044963] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.290 [2024-04-25 17:23:32.044969] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.044973] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.044981] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.291 [2024-04-25 17:23:32.044986] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.291 [2024-04-25 17:23:32.044989] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.044993] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fa0) on tqpair=0xf21360 00:20:02.291 [2024-04-25 17:23:32.045007] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045018] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045026] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045030] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045058] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fa0, cid 4, qid 0 00:20:02.291 [2024-04-25 17:23:32.045129] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.291 [2024-04-25 17:23:32.045137] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.291 [2024-04-25 17:23:32.045140] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045144] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=4096, cccid=4 00:20:02.291 [2024-04-25 17:23:32.045148] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fa0) on tqpair(0xf21360): expected_datao=0, payload_size=4096 00:20:02.291 [2024-04-25 17:23:32.045152] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045159] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045163] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045170] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.291 [2024-04-25 17:23:32.045176] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.291 [2024-04-25 17:23:32.045179] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045183] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fa0) on tqpair=0xf21360 00:20:02.291 [2024-04-25 17:23:32.045191] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045200] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045209] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045216] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045221] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045226] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:02.291 [2024-04-25 17:23:32.045230] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:02.291 [2024-04-25 17:23:32.045235] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:02.291 [2024-04-25 17:23:32.045249] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045253] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045267] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045271] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045274] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:02.291 [2024-04-25 17:23:32.045303] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fa0, cid 4, qid 0 00:20:02.291 [2024-04-25 17:23:32.045310] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a100, cid 5, qid 0 00:20:02.291 [2024-04-25 17:23:32.045378] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.291 [2024-04-25 17:23:32.045384] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.291 [2024-04-25 17:23:32.045388] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045391] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fa0) on tqpair=0xf21360 00:20:02.291 [2024-04-25 17:23:32.045398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.291 [2024-04-25 17:23:32.045403] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.291 [2024-04-25 17:23:32.045406] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045410] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a100) on tqpair=0xf21360 00:20:02.291 [2024-04-25 17:23:32.045420] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045449] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a100, cid 5, qid 0 00:20:02.291 [2024-04-25 17:23:32.045505] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.291 [2024-04-25 17:23:32.045512] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.291 [2024-04-25 17:23:32.045515] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045519] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a100) on tqpair=0xf21360 00:20:02.291 [2024-04-25 17:23:32.045529] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045533] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045557] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a100, cid 5, qid 0 00:20:02.291 [2024-04-25 17:23:32.045609] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.291 [2024-04-25 17:23:32.045618] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.291 [2024-04-25 17:23:32.045621] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045625] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a100) on tqpair=0xf21360 00:20:02.291 [2024-04-25 17:23:32.045635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045639] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045663] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a100, cid 5, qid 0 00:20:02.291 [2024-04-25 17:23:32.045744] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.291 [2024-04-25 17:23:32.045768] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.291 [2024-04-25 17:23:32.045772] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045776] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a100) on tqpair=0xf21360 00:20:02.291 [2024-04-25 17:23:32.045789] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045794] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045809] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045813] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045827] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045831] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045845] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.045849] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf21360) 00:20:02.291 [2024-04-25 17:23:32.045855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.291 [2024-04-25 17:23:32.045877] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a100, cid 5, qid 0 00:20:02.291 [2024-04-25 17:23:32.045885] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fa0, cid 4, qid 0 00:20:02.291 [2024-04-25 17:23:32.045889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a260, cid 6, qid 0 00:20:02.291 [2024-04-25 17:23:32.045894] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a3c0, cid 7, qid 0 00:20:02.291 [2024-04-25 17:23:32.046026] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.291 [2024-04-25 17:23:32.046033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.291 [2024-04-25 17:23:32.046037] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.046040] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=8192, cccid=5 00:20:02.291 [2024-04-25 17:23:32.046045] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf6a100) on tqpair(0xf21360): expected_datao=0, payload_size=8192 00:20:02.291 [2024-04-25 17:23:32.046050] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.046066] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.046070] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.291 [2024-04-25 17:23:32.046076] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.292 [2024-04-25 17:23:32.046096] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.292 [2024-04-25 17:23:32.046100] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046118] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=512, cccid=4 00:20:02.292 [2024-04-25 17:23:32.046123] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fa0) on tqpair(0xf21360): expected_datao=0, payload_size=512 00:20:02.292 [2024-04-25 17:23:32.046127] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046133] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046137] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046142] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.292 [2024-04-25 17:23:32.046147] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.292 [2024-04-25 17:23:32.046150] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046154] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=512, cccid=6 00:20:02.292 [2024-04-25 17:23:32.046158] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf6a260) on tqpair(0xf21360): expected_datao=0, payload_size=512 00:20:02.292 [2024-04-25 17:23:32.046162] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046168] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046171] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046176] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:02.292 [2024-04-25 17:23:32.046181] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:02.292 [2024-04-25 17:23:32.046185] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046188] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf21360): datao=0, datal=4096, cccid=7 00:20:02.292 [2024-04-25 17:23:32.046192] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xf6a3c0) on tqpair(0xf21360): expected_datao=0, payload_size=4096 00:20:02.292 [2024-04-25 17:23:32.046196] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046202] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046206] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046214] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.292 [2024-04-25 17:23:32.046219] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.292 [2024-04-25 17:23:32.046222] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.292 ===================================================== 00:20:02.292 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.292 ===================================================== 00:20:02.292 Controller Capabilities/Features 00:20:02.292 ================================ 00:20:02.292 Vendor ID: 8086 00:20:02.292 Subsystem Vendor ID: 8086 00:20:02.292 Serial Number: SPDK00000000000001 00:20:02.292 Model Number: SPDK bdev Controller 00:20:02.292 Firmware Version: 24.05 00:20:02.292 Recommended Arb Burst: 6 00:20:02.292 IEEE OUI Identifier: e4 d2 5c 00:20:02.292 Multi-path I/O 00:20:02.292 May have multiple subsystem ports: Yes 00:20:02.292 May have multiple controllers: Yes 00:20:02.292 Associated with SR-IOV VF: No 00:20:02.292 Max Data Transfer Size: 131072 00:20:02.292 Max Number of Namespaces: 32 00:20:02.292 Max Number of I/O Queues: 127 00:20:02.292 NVMe Specification Version (VS): 1.3 00:20:02.292 NVMe Specification Version (Identify): 1.3 00:20:02.292 Maximum Queue Entries: 128 00:20:02.292 Contiguous Queues Required: Yes 00:20:02.292 Arbitration Mechanisms Supported 00:20:02.292 Weighted Round Robin: Not Supported 00:20:02.292 Vendor Specific: Not Supported 00:20:02.292 Reset Timeout: 15000 ms 00:20:02.292 Doorbell Stride: 4 bytes 00:20:02.292 NVM Subsystem Reset: Not Supported 00:20:02.292 Command Sets Supported 00:20:02.292 NVM Command Set: Supported 00:20:02.292 Boot Partition: Not Supported 00:20:02.292 Memory Page Size Minimum: 4096 bytes 00:20:02.292 Memory Page Size Maximum: 4096 bytes 00:20:02.292 Persistent Memory Region: Not Supported 00:20:02.292 Optional Asynchronous Events Supported 00:20:02.292 Namespace Attribute Notices: Supported 00:20:02.292 Firmware Activation Notices: Not Supported 00:20:02.292 ANA Change Notices: Not Supported 00:20:02.292 PLE Aggregate Log Change Notices: Not Supported 00:20:02.292 LBA Status Info Alert Notices: Not Supported 00:20:02.292 EGE Aggregate Log Change Notices: Not Supported 00:20:02.292 Normal NVM Subsystem Shutdown event: Not Supported 00:20:02.292 Zone Descriptor Change Notices: Not Supported 00:20:02.292 Discovery Log Change Notices: Not Supported 00:20:02.292 Controller Attributes 00:20:02.292 128-bit Host Identifier: Supported 00:20:02.292 Non-Operational Permissive Mode: Not Supported 00:20:02.292 NVM Sets: Not Supported 00:20:02.292 Read Recovery Levels: Not Supported 00:20:02.292 Endurance Groups: Not Supported 00:20:02.292 Predictable Latency Mode: Not Supported 00:20:02.292 Traffic Based Keep ALive: Not Supported 00:20:02.292 Namespace Granularity: Not Supported 00:20:02.292 SQ Associations: Not Supported 00:20:02.292 UUID List: Not Supported 00:20:02.292 Multi-Domain Subsystem: Not Supported 00:20:02.292 Fixed Capacity Management: Not Supported 00:20:02.292 Variable Capacity 
Management: Not Supported 00:20:02.292 Delete Endurance Group: Not Supported 00:20:02.292 Delete NVM Set: Not Supported 00:20:02.292 Extended LBA Formats Supported: Not Supported 00:20:02.292 Flexible Data Placement Supported: Not Supported 00:20:02.292 00:20:02.292 Controller Memory Buffer Support 00:20:02.292 ================================ 00:20:02.292 Supported: No 00:20:02.292 00:20:02.292 Persistent Memory Region Support 00:20:02.292 ================================ 00:20:02.292 Supported: No 00:20:02.292 00:20:02.292 Admin Command Set Attributes 00:20:02.292 ============================ 00:20:02.292 Security Send/Receive: Not Supported 00:20:02.292 Format NVM: Not Supported 00:20:02.292 Firmware Activate/Download: Not Supported 00:20:02.292 Namespace Management: Not Supported 00:20:02.292 Device Self-Test: Not Supported 00:20:02.292 Directives: Not Supported 00:20:02.292 NVMe-MI: Not Supported 00:20:02.292 Virtualization Management: Not Supported 00:20:02.292 Doorbell Buffer Config: Not Supported 00:20:02.292 Get LBA Status Capability: Not Supported 00:20:02.292 Command & Feature Lockdown Capability: Not Supported 00:20:02.292 Abort Command Limit: 4 00:20:02.292 Async Event Request Limit: 4 00:20:02.292 Number of Firmware Slots: N/A 00:20:02.292 Firmware Slot 1 Read-Only: N/A 00:20:02.292 Firmware Activation Without Reset: [2024-04-25 17:23:32.046226] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a100) on tqpair=0xf21360 00:20:02.292 [2024-04-25 17:23:32.046243] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.292 [2024-04-25 17:23:32.046249] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.292 [2024-04-25 17:23:32.046252] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046256] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fa0) on tqpair=0xf21360 00:20:02.292 [2024-04-25 17:23:32.046265] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.292 [2024-04-25 17:23:32.046271] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.292 [2024-04-25 17:23:32.046274] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046277] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a260) on tqpair=0xf21360 00:20:02.292 [2024-04-25 17:23:32.046284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.292 [2024-04-25 17:23:32.046290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.292 [2024-04-25 17:23:32.046293] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.292 [2024-04-25 17:23:32.046297] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a3c0) on tqpair=0xf21360 00:20:02.292 N/A 00:20:02.292 Multiple Update Detection Support: N/A 00:20:02.292 Firmware Update Granularity: No Information Provided 00:20:02.292 Per-Namespace SMART Log: No 00:20:02.292 Asymmetric Namespace Access Log Page: Not Supported 00:20:02.292 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:02.292 Command Effects Log Page: Supported 00:20:02.292 Get Log Page Extended Data: Supported 00:20:02.292 Telemetry Log Pages: Not Supported 00:20:02.292 Persistent Event Log Pages: Not Supported 00:20:02.292 Supported Log Pages Log Page: May Support 00:20:02.292 Commands Supported & Effects Log Page: Not Supported 00:20:02.292 Feature Identifiers & Effects Log Page:May Support 
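The report running through this part of the log is the Identify data that the initiator reads back from nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. For reference only (this is not part of the test scripts), the same data can be pulled by hand with stock nvme-cli from the initiator side, assuming nvme-cli is installed; the device nodes below are illustrative and depend on enumeration order:
  nvme discover -t tcp -a 10.0.0.2 -s 4420                         # discovery log page
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                                          # controller Identify (CNS 01h)
  nvme id-ns   /dev/nvme0n1                                        # namespace Identify (CNS 00h)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1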
00:20:02.292 NVMe-MI Commands & Effects Log Page: May Support 00:20:02.292 Data Area 4 for Telemetry Log: Not Supported 00:20:02.292 Error Log Page Entries Supported: 128 00:20:02.292 Keep Alive: Supported 00:20:02.292 Keep Alive Granularity: 10000 ms 00:20:02.292 00:20:02.292 NVM Command Set Attributes 00:20:02.292 ========================== 00:20:02.292 Submission Queue Entry Size 00:20:02.292 Max: 64 00:20:02.292 Min: 64 00:20:02.292 Completion Queue Entry Size 00:20:02.292 Max: 16 00:20:02.292 Min: 16 00:20:02.292 Number of Namespaces: 32 00:20:02.292 Compare Command: Supported 00:20:02.292 Write Uncorrectable Command: Not Supported 00:20:02.292 Dataset Management Command: Supported 00:20:02.292 Write Zeroes Command: Supported 00:20:02.292 Set Features Save Field: Not Supported 00:20:02.292 Reservations: Supported 00:20:02.292 Timestamp: Not Supported 00:20:02.292 Copy: Supported 00:20:02.292 Volatile Write Cache: Present 00:20:02.293 Atomic Write Unit (Normal): 1 00:20:02.293 Atomic Write Unit (PFail): 1 00:20:02.293 Atomic Compare & Write Unit: 1 00:20:02.293 Fused Compare & Write: Supported 00:20:02.293 Scatter-Gather List 00:20:02.293 SGL Command Set: Supported 00:20:02.293 SGL Keyed: Supported 00:20:02.293 SGL Bit Bucket Descriptor: Not Supported 00:20:02.293 SGL Metadata Pointer: Not Supported 00:20:02.293 Oversized SGL: Not Supported 00:20:02.293 SGL Metadata Address: Not Supported 00:20:02.293 SGL Offset: Supported 00:20:02.293 Transport SGL Data Block: Not Supported 00:20:02.293 Replay Protected Memory Block: Not Supported 00:20:02.293 00:20:02.293 Firmware Slot Information 00:20:02.293 ========================= 00:20:02.293 Active slot: 1 00:20:02.293 Slot 1 Firmware Revision: 24.05 00:20:02.293 00:20:02.293 00:20:02.293 Commands Supported and Effects 00:20:02.293 ============================== 00:20:02.293 Admin Commands 00:20:02.293 -------------- 00:20:02.293 Get Log Page (02h): Supported 00:20:02.293 Identify (06h): Supported 00:20:02.293 Abort (08h): Supported 00:20:02.293 Set Features (09h): Supported 00:20:02.293 Get Features (0Ah): Supported 00:20:02.293 Asynchronous Event Request (0Ch): Supported 00:20:02.293 Keep Alive (18h): Supported 00:20:02.293 I/O Commands 00:20:02.293 ------------ 00:20:02.293 Flush (00h): Supported LBA-Change 00:20:02.293 Write (01h): Supported LBA-Change 00:20:02.293 Read (02h): Supported 00:20:02.293 Compare (05h): Supported 00:20:02.293 Write Zeroes (08h): Supported LBA-Change 00:20:02.293 Dataset Management (09h): Supported LBA-Change 00:20:02.293 Copy (19h): Supported LBA-Change 00:20:02.293 Unknown (79h): Supported LBA-Change 00:20:02.293 Unknown (7Ah): Supported 00:20:02.293 00:20:02.293 Error Log 00:20:02.293 ========= 00:20:02.293 00:20:02.293 Arbitration 00:20:02.293 =========== 00:20:02.293 Arbitration Burst: 1 00:20:02.293 00:20:02.293 Power Management 00:20:02.293 ================ 00:20:02.293 Number of Power States: 1 00:20:02.293 Current Power State: Power State #0 00:20:02.293 Power State #0: 00:20:02.293 Max Power: 0.00 W 00:20:02.293 Non-Operational State: Operational 00:20:02.293 Entry Latency: Not Reported 00:20:02.293 Exit Latency: Not Reported 00:20:02.293 Relative Read Throughput: 0 00:20:02.293 Relative Read Latency: 0 00:20:02.293 Relative Write Throughput: 0 00:20:02.293 Relative Write Latency: 0 00:20:02.293 Idle Power: Not Reported 00:20:02.293 Active Power: Not Reported 00:20:02.293 Non-Operational Permissive Mode: Not Supported 00:20:02.293 00:20:02.293 Health Information 00:20:02.293 ================== 
00:20:02.293 Critical Warnings: 00:20:02.293 Available Spare Space: OK 00:20:02.293 Temperature: OK 00:20:02.293 Device Reliability: OK 00:20:02.293 Read Only: No 00:20:02.293 Volatile Memory Backup: OK 00:20:02.293 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:02.293 Temperature Threshold: [2024-04-25 17:23:32.046393] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.046400] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf21360) 00:20:02.293 [2024-04-25 17:23:32.046407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.293 [2024-04-25 17:23:32.046430] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a3c0, cid 7, qid 0 00:20:02.293 [2024-04-25 17:23:32.046492] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.293 [2024-04-25 17:23:32.046498] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.293 [2024-04-25 17:23:32.046501] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.046505] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a3c0) on tqpair=0xf21360 00:20:02.293 [2024-04-25 17:23:32.046535] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:02.293 [2024-04-25 17:23:32.046547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.293 [2024-04-25 17:23:32.046554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.293 [2024-04-25 17:23:32.046560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.293 [2024-04-25 17:23:32.046566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.293 [2024-04-25 17:23:32.046574] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.046578] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.046582] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf21360) 00:20:02.293 [2024-04-25 17:23:32.046589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.293 [2024-04-25 17:23:32.046611] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e40, cid 3, qid 0 00:20:02.293 [2024-04-25 17:23:32.046659] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.293 [2024-04-25 17:23:32.046665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.293 [2024-04-25 17:23:32.046669] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.046673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e40) on tqpair=0xf21360 00:20:02.293 [2024-04-25 17:23:32.046680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.046684] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.046687] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf21360) 00:20:02.293 [2024-04-25 17:23:32.046694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.293 [2024-04-25 17:23:32.046716] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e40, cid 3, qid 0 00:20:02.293 [2024-04-25 17:23:32.050749] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.293 [2024-04-25 17:23:32.050771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.293 [2024-04-25 17:23:32.050776] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.050781] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e40) on tqpair=0xf21360 00:20:02.293 [2024-04-25 17:23:32.050786] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:02.293 [2024-04-25 17:23:32.050791] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:02.293 [2024-04-25 17:23:32.050804] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.050810] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.050814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf21360) 00:20:02.293 [2024-04-25 17:23:32.051134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.293 [2024-04-25 17:23:32.051188] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e40, cid 3, qid 0 00:20:02.293 [2024-04-25 17:23:32.051262] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:02.293 [2024-04-25 17:23:32.051270] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:02.293 [2024-04-25 17:23:32.051274] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:02.293 [2024-04-25 17:23:32.051278] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e40) on tqpair=0xf21360 00:20:02.293 [2024-04-25 17:23:32.051288] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:20:02.293 0 Kelvin (-273 Celsius) 00:20:02.293 Available Spare: 0% 00:20:02.293 Available Spare Threshold: 0% 00:20:02.293 Life Percentage Used: 0% 00:20:02.293 Data Units Read: 0 00:20:02.293 Data Units Written: 0 00:20:02.293 Host Read Commands: 0 00:20:02.294 Host Write Commands: 0 00:20:02.294 Controller Busy Time: 0 minutes 00:20:02.294 Power Cycles: 0 00:20:02.294 Power On Hours: 0 hours 00:20:02.294 Unsafe Shutdowns: 0 00:20:02.294 Unrecoverable Media Errors: 0 00:20:02.294 Lifetime Error Log Entries: 0 00:20:02.294 Warning Temperature Time: 0 minutes 00:20:02.294 Critical Temperature Time: 0 minutes 00:20:02.294 00:20:02.294 Number of Queues 00:20:02.294 ================ 00:20:02.294 Number of I/O Submission Queues: 127 00:20:02.294 Number of I/O Completion Queues: 127 00:20:02.294 00:20:02.294 Active Namespaces 00:20:02.294 ================= 00:20:02.294 Namespace ID:1 00:20:02.294 Error Recovery Timeout: Unlimited 00:20:02.294 Command Set Identifier: NVM (00h) 00:20:02.294 Deallocate: Supported 00:20:02.294 Deallocated/Unwritten Error: Not Supported 00:20:02.294 Deallocated Read Value: 
Unknown 00:20:02.294 Deallocate in Write Zeroes: Not Supported 00:20:02.294 Deallocated Guard Field: 0xFFFF 00:20:02.294 Flush: Supported 00:20:02.294 Reservation: Supported 00:20:02.294 Namespace Sharing Capabilities: Multiple Controllers 00:20:02.294 Size (in LBAs): 131072 (0GiB) 00:20:02.294 Capacity (in LBAs): 131072 (0GiB) 00:20:02.294 Utilization (in LBAs): 131072 (0GiB) 00:20:02.294 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:02.294 EUI64: ABCDEF0123456789 00:20:02.294 UUID: dfa0e075-0c40-48a4-9d7d-5acc52e099f7 00:20:02.294 Thin Provisioning: Not Supported 00:20:02.294 Per-NS Atomic Units: Yes 00:20:02.294 Atomic Boundary Size (Normal): 0 00:20:02.294 Atomic Boundary Size (PFail): 0 00:20:02.294 Atomic Boundary Offset: 0 00:20:02.294 Maximum Single Source Range Length: 65535 00:20:02.294 Maximum Copy Length: 65535 00:20:02.294 Maximum Source Range Count: 1 00:20:02.294 NGUID/EUI64 Never Reused: No 00:20:02.294 Namespace Write Protected: No 00:20:02.294 Number of LBA Formats: 1 00:20:02.294 Current LBA Format: LBA Format #00 00:20:02.294 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:02.294 00:20:02.294 17:23:32 -- host/identify.sh@51 -- # sync 00:20:02.294 17:23:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.294 17:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.294 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.294 17:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.294 17:23:32 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:02.294 17:23:32 -- host/identify.sh@56 -- # nvmftestfini 00:20:02.294 17:23:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:02.294 17:23:32 -- nvmf/common.sh@117 -- # sync 00:20:02.294 17:23:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:02.294 17:23:32 -- nvmf/common.sh@120 -- # set +e 00:20:02.294 17:23:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:02.294 17:23:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:02.294 rmmod nvme_tcp 00:20:02.294 rmmod nvme_fabrics 00:20:02.294 rmmod nvme_keyring 00:20:02.294 17:23:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:02.294 17:23:32 -- nvmf/common.sh@124 -- # set -e 00:20:02.294 17:23:32 -- nvmf/common.sh@125 -- # return 0 00:20:02.294 17:23:32 -- nvmf/common.sh@478 -- # '[' -n 86839 ']' 00:20:02.294 17:23:32 -- nvmf/common.sh@479 -- # killprocess 86839 00:20:02.294 17:23:32 -- common/autotest_common.sh@936 -- # '[' -z 86839 ']' 00:20:02.294 17:23:32 -- common/autotest_common.sh@940 -- # kill -0 86839 00:20:02.294 17:23:32 -- common/autotest_common.sh@941 -- # uname 00:20:02.294 17:23:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.294 17:23:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86839 00:20:02.294 killing process with pid 86839 00:20:02.294 17:23:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:02.294 17:23:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:02.294 17:23:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86839' 00:20:02.294 17:23:32 -- common/autotest_common.sh@955 -- # kill 86839 00:20:02.294 [2024-04-25 17:23:32.200423] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:02.294 17:23:32 -- common/autotest_common.sh@960 -- # wait 86839 00:20:02.553 17:23:32 -- nvmf/common.sh@481 -- # '[' '' == 
iso ']' 00:20:02.553 17:23:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:02.553 17:23:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:02.553 17:23:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.553 17:23:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.553 17:23:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.553 17:23:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.553 17:23:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.553 17:23:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:02.553 00:20:02.553 real 0m2.514s 00:20:02.553 user 0m7.142s 00:20:02.553 sys 0m0.609s 00:20:02.553 ************************************ 00:20:02.553 END TEST nvmf_identify 00:20:02.553 ************************************ 00:20:02.553 17:23:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:02.553 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.553 17:23:32 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:02.553 17:23:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:02.553 17:23:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.553 17:23:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.813 ************************************ 00:20:02.813 START TEST nvmf_perf 00:20:02.813 ************************************ 00:20:02.813 17:23:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:02.813 * Looking for test storage... 00:20:02.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:02.813 17:23:32 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:02.813 17:23:32 -- nvmf/common.sh@7 -- # uname -s 00:20:02.813 17:23:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.813 17:23:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.813 17:23:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.813 17:23:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.813 17:23:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.813 17:23:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.813 17:23:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.813 17:23:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.813 17:23:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.813 17:23:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.813 17:23:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:02.813 17:23:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:02.813 17:23:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.813 17:23:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.813 17:23:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:02.813 17:23:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.813 17:23:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:02.813 17:23:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.813 17:23:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.813 17:23:32 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.813 17:23:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.813 17:23:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.813 17:23:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.813 17:23:32 -- paths/export.sh@5 -- # export PATH 00:20:02.813 17:23:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.813 17:23:32 -- nvmf/common.sh@47 -- # : 0 00:20:02.813 17:23:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.813 17:23:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.813 17:23:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.813 17:23:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.813 17:23:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.813 17:23:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.813 17:23:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.813 17:23:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.813 17:23:32 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:02.813 17:23:32 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:02.813 17:23:32 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:02.813 17:23:32 -- host/perf.sh@17 -- # nvmftestinit 00:20:02.813 17:23:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:02.813 17:23:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.813 17:23:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:02.813 17:23:32 -- 
nvmf/common.sh@399 -- # local -g is_hw=no 00:20:02.813 17:23:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:02.813 17:23:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.813 17:23:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.813 17:23:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.813 17:23:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:02.813 17:23:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:02.813 17:23:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:02.813 17:23:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:02.813 17:23:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:02.813 17:23:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:02.813 17:23:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.813 17:23:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.813 17:23:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:02.813 17:23:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:02.813 17:23:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:02.813 17:23:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:02.813 17:23:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:02.813 17:23:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.813 17:23:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:02.813 17:23:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:02.813 17:23:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:02.813 17:23:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:02.813 17:23:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:02.813 17:23:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:02.813 Cannot find device "nvmf_tgt_br" 00:20:02.813 17:23:32 -- nvmf/common.sh@155 -- # true 00:20:02.813 17:23:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.813 Cannot find device "nvmf_tgt_br2" 00:20:02.813 17:23:32 -- nvmf/common.sh@156 -- # true 00:20:02.813 17:23:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:02.813 17:23:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:02.813 Cannot find device "nvmf_tgt_br" 00:20:02.813 17:23:32 -- nvmf/common.sh@158 -- # true 00:20:02.813 17:23:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:02.813 Cannot find device "nvmf_tgt_br2" 00:20:02.813 17:23:32 -- nvmf/common.sh@159 -- # true 00:20:02.813 17:23:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:02.813 17:23:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:02.813 17:23:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.813 17:23:32 -- nvmf/common.sh@162 -- # true 00:20:02.813 17:23:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.813 17:23:32 -- nvmf/common.sh@163 -- # true 00:20:02.813 17:23:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.813 17:23:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:03.072 17:23:32 -- nvmf/common.sh@170 -- 
# ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:03.072 17:23:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:03.072 17:23:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:03.072 17:23:32 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:03.072 17:23:32 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:03.072 17:23:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:03.072 17:23:32 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:03.072 17:23:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:03.072 17:23:32 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:03.072 17:23:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:03.072 17:23:32 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:03.072 17:23:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:03.072 17:23:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:03.072 17:23:32 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:03.072 17:23:32 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:03.072 17:23:32 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:03.072 17:23:32 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.072 17:23:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.072 17:23:32 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.072 17:23:32 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.072 17:23:32 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.072 17:23:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:03.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:20:03.072 00:20:03.072 --- 10.0.0.2 ping statistics --- 00:20:03.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.072 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:03.072 17:23:32 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:03.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:20:03.072 00:20:03.072 --- 10.0.0.3 ping statistics --- 00:20:03.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.072 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:03.072 17:23:32 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:03.072 00:20:03.072 --- 10.0.0.1 ping statistics --- 00:20:03.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.072 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:03.072 17:23:32 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.072 17:23:32 -- nvmf/common.sh@422 -- # return 0 00:20:03.072 17:23:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:03.072 17:23:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.072 17:23:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:03.072 17:23:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:03.072 17:23:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.072 17:23:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:03.072 17:23:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:03.072 17:23:33 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:03.072 17:23:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:03.073 17:23:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:03.073 17:23:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 17:23:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.073 17:23:33 -- nvmf/common.sh@470 -- # nvmfpid=87069 00:20:03.073 17:23:33 -- nvmf/common.sh@471 -- # waitforlisten 87069 00:20:03.073 17:23:33 -- common/autotest_common.sh@817 -- # '[' -z 87069 ']' 00:20:03.073 17:23:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.073 17:23:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:03.073 17:23:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.073 17:23:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.073 17:23:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.331 [2024-04-25 17:23:33.076103] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:03.331 [2024-04-25 17:23:33.076183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.331 [2024-04-25 17:23:33.214521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.331 [2024-04-25 17:23:33.264202] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.331 [2024-04-25 17:23:33.264250] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.331 [2024-04-25 17:23:33.264286] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.331 [2024-04-25 17:23:33.264294] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.331 [2024-04-25 17:23:33.264300] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
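The nvmf_veth_init trace above builds a small virtual topology: nvmf_init_if (10.0.0.1/24) stays in the default namespace as the initiator interface, the target interface nvmf_tgt_if (10.0.0.2/24) is moved into the nvmf_tgt_ns_spdk namespace, and the bridge-side peers are enslaved to nvmf_br; a second target interface (nvmf_tgt_if2, 10.0.0.3/24) is wired up the same way. Condensed into one place, and omitting the second interface for brevity, the sequence is roughly:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # initiator -> target reachability check
This is a summary of what nvmf/common.sh already traced above, not a substitute for the script.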
00:20:03.331 [2024-04-25 17:23:33.264470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.331 [2024-04-25 17:23:33.264904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.331 [2024-04-25 17:23:33.265027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.331 [2024-04-25 17:23:33.265035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.268 17:23:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.268 17:23:33 -- common/autotest_common.sh@850 -- # return 0 00:20:04.268 17:23:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:04.268 17:23:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:04.268 17:23:33 -- common/autotest_common.sh@10 -- # set +x 00:20:04.268 17:23:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.268 17:23:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:04.268 17:23:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:04.527 17:23:34 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:04.527 17:23:34 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:04.786 17:23:34 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:04.786 17:23:34 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.045 17:23:34 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:05.045 17:23:34 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:05.045 17:23:34 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:05.045 17:23:34 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:05.045 17:23:34 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.304 [2024-04-25 17:23:35.079319] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.304 17:23:35 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.563 17:23:35 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:05.563 17:23:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.563 17:23:35 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:05.563 17:23:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:05.822 17:23:35 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.080 [2024-04-25 17:23:35.916393] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.080 17:23:35 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:06.339 17:23:36 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:06.339 17:23:36 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:06.339 17:23:36 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:06.339 17:23:36 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:07.273 Initializing NVMe 
Controllers 00:20:07.273 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:07.273 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:07.273 Initialization complete. Launching workers. 00:20:07.273 ======================================================== 00:20:07.273 Latency(us) 00:20:07.273 Device Information : IOPS MiB/s Average min max 00:20:07.273 PCIE (0000:00:10.0) NSID 1 from core 0: 22939.43 89.61 1395.10 397.45 7834.51 00:20:07.273 ======================================================== 00:20:07.273 Total : 22939.43 89.61 1395.10 397.45 7834.51 00:20:07.273 00:20:07.532 17:23:37 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:08.941 Initializing NVMe Controllers 00:20:08.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:08.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:08.941 Initialization complete. Launching workers. 00:20:08.941 ======================================================== 00:20:08.941 Latency(us) 00:20:08.941 Device Information : IOPS MiB/s Average min max 00:20:08.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3828.00 14.95 260.94 101.05 5164.69 00:20:08.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8104.63 5959.20 12004.66 00:20:08.941 ======================================================== 00:20:08.941 Total : 3952.00 15.44 507.05 101.05 12004.66 00:20:08.941 00:20:08.941 17:23:38 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.321 Initializing NVMe Controllers 00:20:10.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:10.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:10.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:10.321 Initialization complete. Launching workers. 00:20:10.321 ======================================================== 00:20:10.321 Latency(us) 00:20:10.321 Device Information : IOPS MiB/s Average min max 00:20:10.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9751.39 38.09 3283.05 697.12 7833.14 00:20:10.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2697.83 10.54 11958.01 6392.24 21204.21 00:20:10.321 ======================================================== 00:20:10.321 Total : 12449.22 48.63 5162.97 697.12 21204.21 00:20:10.321 00:20:10.321 17:23:39 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:10.321 17:23:39 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:13.022 Initializing NVMe Controllers 00:20:13.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.022 Controller IO queue size 128, less than required. 00:20:13.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.022 Controller IO queue size 128, less than required. 
00:20:13.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:13.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:13.022 Initialization complete. Launching workers. 00:20:13.022 ======================================================== 00:20:13.022 Latency(us) 00:20:13.022 Device Information : IOPS MiB/s Average min max 00:20:13.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1773.42 443.36 73413.60 46812.05 115763.19 00:20:13.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 593.47 148.37 224068.02 68204.50 383300.51 00:20:13.022 ======================================================== 00:20:13.022 Total : 2366.89 591.72 111188.59 46812.05 383300.51 00:20:13.022 00:20:13.022 17:23:42 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:13.022 No valid NVMe controllers or AIO or URING devices found 00:20:13.022 Initializing NVMe Controllers 00:20:13.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.022 Controller IO queue size 128, less than required. 00:20:13.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.022 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:13.022 Controller IO queue size 128, less than required. 00:20:13.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:13.022 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:13.022 WARNING: Some requested NVMe devices were skipped 00:20:13.022 17:23:42 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:15.556 Initializing NVMe Controllers 00:20:15.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.556 Controller IO queue size 128, less than required. 00:20:15.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.556 Controller IO queue size 128, less than required. 00:20:15.556 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:15.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:15.556 Initialization complete. Launching workers. 
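For orientation, the NVMe/TCP target these fabric perf runs are hitting was stood up near the top of this test; condensed from the xtrace above (rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py), the sequence is roughly:

  # create the TCP transport and a subsystem that allows any host (-a)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # expose a 64 MB / 512 B-block malloc bdev (Malloc0) and the local NVMe drive (Nvme0n1)
  # as namespaces 1 and 2 of that subsystem
  rpc.py bdev_malloc_create 64 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  # listen for NVMe/TCP connections on 10.0.0.2:4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the fabric runs then point spdk_nvme_perf at that listener, e.g.
  spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

This is only a recap of commands already echoed in the trace, not additional setup steps.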
00:20:15.556 00:20:15.556 ==================== 00:20:15.556 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:15.556 TCP transport: 00:20:15.556 polls: 8567 00:20:15.556 idle_polls: 4958 00:20:15.556 sock_completions: 3609 00:20:15.556 nvme_completions: 4775 00:20:15.556 submitted_requests: 7174 00:20:15.556 queued_requests: 1 00:20:15.556 00:20:15.556 ==================== 00:20:15.556 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:15.556 TCP transport: 00:20:15.556 polls: 11385 00:20:15.556 idle_polls: 7935 00:20:15.556 sock_completions: 3450 00:20:15.556 nvme_completions: 7009 00:20:15.556 submitted_requests: 10554 00:20:15.556 queued_requests: 1 00:20:15.556 ======================================================== 00:20:15.556 Latency(us) 00:20:15.556 Device Information : IOPS MiB/s Average min max 00:20:15.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1193.41 298.35 109417.25 65255.85 176061.76 00:20:15.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1751.87 437.97 74176.29 27537.81 114202.17 00:20:15.556 ======================================================== 00:20:15.556 Total : 2945.27 736.32 88455.73 27537.81 176061.76 00:20:15.556 00:20:15.556 17:23:45 -- host/perf.sh@66 -- # sync 00:20:15.556 17:23:45 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.556 17:23:45 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:15.556 17:23:45 -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:20:15.556 17:23:45 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:15.815 17:23:45 -- host/perf.sh@72 -- # ls_guid=6191fcfe-c9a3-4997-9f1f-c255c6794c6b 00:20:15.815 17:23:45 -- host/perf.sh@73 -- # get_lvs_free_mb 6191fcfe-c9a3-4997-9f1f-c255c6794c6b 00:20:15.815 17:23:45 -- common/autotest_common.sh@1350 -- # local lvs_uuid=6191fcfe-c9a3-4997-9f1f-c255c6794c6b 00:20:15.815 17:23:45 -- common/autotest_common.sh@1351 -- # local lvs_info 00:20:15.815 17:23:45 -- common/autotest_common.sh@1352 -- # local fc 00:20:15.815 17:23:45 -- common/autotest_common.sh@1353 -- # local cs 00:20:15.815 17:23:45 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:16.074 17:23:46 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:20:16.074 { 00:20:16.074 "base_bdev": "Nvme0n1", 00:20:16.074 "block_size": 4096, 00:20:16.074 "cluster_size": 4194304, 00:20:16.074 "free_clusters": 1278, 00:20:16.074 "name": "lvs_0", 00:20:16.074 "total_data_clusters": 1278, 00:20:16.074 "uuid": "6191fcfe-c9a3-4997-9f1f-c255c6794c6b" 00:20:16.074 } 00:20:16.074 ]' 00:20:16.074 17:23:46 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="6191fcfe-c9a3-4997-9f1f-c255c6794c6b") .free_clusters' 00:20:16.333 17:23:46 -- common/autotest_common.sh@1355 -- # fc=1278 00:20:16.333 17:23:46 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="6191fcfe-c9a3-4997-9f1f-c255c6794c6b") .cluster_size' 00:20:16.333 5112 00:20:16.333 17:23:46 -- common/autotest_common.sh@1356 -- # cs=4194304 00:20:16.333 17:23:46 -- common/autotest_common.sh@1359 -- # free_mb=5112 00:20:16.333 17:23:46 -- common/autotest_common.sh@1360 -- # echo 5112 00:20:16.333 17:23:46 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:16.333 17:23:46 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 6191fcfe-c9a3-4997-9f1f-c255c6794c6b lbd_0 5112 00:20:16.592 17:23:46 -- host/perf.sh@80 -- # lb_guid=95ff90de-24fd-4d5d-87b0-61169fb07f62 00:20:16.592 17:23:46 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 95ff90de-24fd-4d5d-87b0-61169fb07f62 lvs_n_0 00:20:16.851 17:23:46 -- host/perf.sh@83 -- # ls_nested_guid=3e264c96-abfe-4da4-8de4-4f7eb6ba211a 00:20:16.851 17:23:46 -- host/perf.sh@84 -- # get_lvs_free_mb 3e264c96-abfe-4da4-8de4-4f7eb6ba211a 00:20:16.851 17:23:46 -- common/autotest_common.sh@1350 -- # local lvs_uuid=3e264c96-abfe-4da4-8de4-4f7eb6ba211a 00:20:16.851 17:23:46 -- common/autotest_common.sh@1351 -- # local lvs_info 00:20:16.851 17:23:46 -- common/autotest_common.sh@1352 -- # local fc 00:20:16.851 17:23:46 -- common/autotest_common.sh@1353 -- # local cs 00:20:16.851 17:23:46 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:17.111 17:23:46 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:20:17.111 { 00:20:17.111 "base_bdev": "Nvme0n1", 00:20:17.111 "block_size": 4096, 00:20:17.111 "cluster_size": 4194304, 00:20:17.111 "free_clusters": 0, 00:20:17.111 "name": "lvs_0", 00:20:17.111 "total_data_clusters": 1278, 00:20:17.111 "uuid": "6191fcfe-c9a3-4997-9f1f-c255c6794c6b" 00:20:17.111 }, 00:20:17.111 { 00:20:17.111 "base_bdev": "95ff90de-24fd-4d5d-87b0-61169fb07f62", 00:20:17.111 "block_size": 4096, 00:20:17.111 "cluster_size": 4194304, 00:20:17.111 "free_clusters": 1276, 00:20:17.111 "name": "lvs_n_0", 00:20:17.111 "total_data_clusters": 1276, 00:20:17.111 "uuid": "3e264c96-abfe-4da4-8de4-4f7eb6ba211a" 00:20:17.111 } 00:20:17.111 ]' 00:20:17.111 17:23:46 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="3e264c96-abfe-4da4-8de4-4f7eb6ba211a") .free_clusters' 00:20:17.111 17:23:47 -- common/autotest_common.sh@1355 -- # fc=1276 00:20:17.111 17:23:47 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="3e264c96-abfe-4da4-8de4-4f7eb6ba211a") .cluster_size' 00:20:17.111 5104 00:20:17.111 17:23:47 -- common/autotest_common.sh@1356 -- # cs=4194304 00:20:17.111 17:23:47 -- common/autotest_common.sh@1359 -- # free_mb=5104 00:20:17.111 17:23:47 -- common/autotest_common.sh@1360 -- # echo 5104 00:20:17.111 17:23:47 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:17.111 17:23:47 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3e264c96-abfe-4da4-8de4-4f7eb6ba211a lbd_nest_0 5104 00:20:17.370 17:23:47 -- host/perf.sh@88 -- # lb_nested_guid=56907233-634f-4808-a3dd-69cf28fd06b9 00:20:17.370 17:23:47 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.629 17:23:47 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:17.629 17:23:47 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 56907233-634f-4808-a3dd-69cf28fd06b9 00:20:17.887 17:23:47 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.147 17:23:47 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:18.147 17:23:47 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:18.147 17:23:47 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:18.147 17:23:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:18.147 17:23:47 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:18.406 No valid NVMe controllers or AIO or URING devices found 00:20:18.406 Initializing NVMe Controllers 00:20:18.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.406 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:18.406 WARNING: Some requested NVMe devices were skipped 00:20:18.406 17:23:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:18.406 17:23:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.616 Initializing NVMe Controllers 00:20:30.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.616 Initialization complete. Launching workers. 00:20:30.616 ======================================================== 00:20:30.616 Latency(us) 00:20:30.616 Device Information : IOPS MiB/s Average min max 00:20:30.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 970.10 121.26 1030.47 328.94 7971.95 00:20:30.616 ======================================================== 00:20:30.616 Total : 970.10 121.26 1030.47 328.94 7971.95 00:20:30.616 00:20:30.616 17:23:58 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:30.616 17:23:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:30.616 17:23:58 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.616 No valid NVMe controllers or AIO or URING devices found 00:20:30.616 Initializing NVMe Controllers 00:20:30.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.616 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:30.616 WARNING: Some requested NVMe devices were skipped 00:20:30.616 17:23:58 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:30.616 17:23:58 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.597 [2024-04-25 17:24:09.070736] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17ac0 is same with the state(5) to be set 00:20:40.597 [2024-04-25 17:24:09.070808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17ac0 is same with the state(5) to be set 00:20:40.597 [2024-04-25 17:24:09.070834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17ac0 is same with the state(5) to be set 00:20:40.597 Initializing NVMe Controllers 00:20:40.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.597 Initialization complete. Launching workers. 
00:20:40.597 ======================================================== 00:20:40.597 Latency(us) 00:20:40.597 Device Information : IOPS MiB/s Average min max 00:20:40.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1072.60 134.07 29861.16 7523.28 255013.84 00:20:40.597 ======================================================== 00:20:40.597 Total : 1072.60 134.07 29861.16 7523.28 255013.84 00:20:40.597 00:20:40.597 17:24:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:40.597 17:24:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.597 17:24:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.597 No valid NVMe controllers or AIO or URING devices found 00:20:40.597 Initializing NVMe Controllers 00:20:40.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.597 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:40.597 WARNING: Some requested NVMe devices were skipped 00:20:40.597 17:24:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.597 17:24:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.573 Initializing NVMe Controllers 00:20:50.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.573 Controller IO queue size 128, less than required. 00:20:50.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.573 Initialization complete. Launching workers. 
00:20:50.573 ======================================================== 00:20:50.573 Latency(us) 00:20:50.573 Device Information : IOPS MiB/s Average min max 00:20:50.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4171.63 521.45 30687.38 10008.85 66810.20 00:20:50.573 ======================================================== 00:20:50.573 Total : 4171.63 521.45 30687.38 10008.85 66810.20 00:20:50.573 00:20:50.573 17:24:19 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.573 17:24:20 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 56907233-634f-4808-a3dd-69cf28fd06b9 00:20:50.573 17:24:20 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:50.831 17:24:20 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 95ff90de-24fd-4d5d-87b0-61169fb07f62 00:20:51.089 17:24:20 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:51.089 17:24:21 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:51.089 17:24:21 -- host/perf.sh@114 -- # nvmftestfini 00:20:51.089 17:24:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:51.089 17:24:21 -- nvmf/common.sh@117 -- # sync 00:20:51.089 17:24:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.089 17:24:21 -- nvmf/common.sh@120 -- # set +e 00:20:51.089 17:24:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.089 17:24:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.089 rmmod nvme_tcp 00:20:51.089 rmmod nvme_fabrics 00:20:51.089 rmmod nvme_keyring 00:20:51.347 17:24:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.347 17:24:21 -- nvmf/common.sh@124 -- # set -e 00:20:51.347 17:24:21 -- nvmf/common.sh@125 -- # return 0 00:20:51.347 17:24:21 -- nvmf/common.sh@478 -- # '[' -n 87069 ']' 00:20:51.347 17:24:21 -- nvmf/common.sh@479 -- # killprocess 87069 00:20:51.347 17:24:21 -- common/autotest_common.sh@936 -- # '[' -z 87069 ']' 00:20:51.347 17:24:21 -- common/autotest_common.sh@940 -- # kill -0 87069 00:20:51.347 17:24:21 -- common/autotest_common.sh@941 -- # uname 00:20:51.347 17:24:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.347 17:24:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87069 00:20:51.347 killing process with pid 87069 00:20:51.347 17:24:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:51.347 17:24:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:51.347 17:24:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87069' 00:20:51.347 17:24:21 -- common/autotest_common.sh@955 -- # kill 87069 00:20:51.347 17:24:21 -- common/autotest_common.sh@960 -- # wait 87069 00:20:52.723 17:24:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:52.723 17:24:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:52.723 17:24:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:52.723 17:24:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.723 17:24:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.723 17:24:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.723 17:24:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.723 17:24:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.723 17:24:22 -- nvmf/common.sh@279 -- # ip 
-4 addr flush nvmf_init_if 00:20:52.723 00:20:52.723 real 0m49.868s 00:20:52.723 user 3m7.560s 00:20:52.723 sys 0m10.250s 00:20:52.723 17:24:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:52.723 ************************************ 00:20:52.723 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.723 END TEST nvmf_perf 00:20:52.723 ************************************ 00:20:52.723 17:24:22 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:52.723 17:24:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:52.723 17:24:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.723 17:24:22 -- common/autotest_common.sh@10 -- # set +x 00:20:52.723 ************************************ 00:20:52.723 START TEST nvmf_fio_host 00:20:52.723 ************************************ 00:20:52.723 17:24:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:52.723 * Looking for test storage... 00:20:52.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:52.723 17:24:22 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:52.723 17:24:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.723 17:24:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.723 17:24:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.723 17:24:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- paths/export.sh@5 -- # export PATH 00:20:52.723 17:24:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:52.723 17:24:22 -- nvmf/common.sh@7 -- # uname -s 00:20:52.723 17:24:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.723 17:24:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.723 17:24:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.723 17:24:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.723 17:24:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.723 17:24:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.723 17:24:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.723 17:24:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.723 17:24:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.723 17:24:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.723 17:24:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:52.723 17:24:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:20:52.723 17:24:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.723 17:24:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.723 17:24:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:52.723 17:24:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.723 17:24:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:52.723 17:24:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.723 17:24:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.723 17:24:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.723 17:24:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- paths/export.sh@5 -- # export PATH 00:20:52.723 17:24:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.723 17:24:22 -- nvmf/common.sh@47 -- # : 0 00:20:52.724 17:24:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:52.724 17:24:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:52.724 17:24:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.724 17:24:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.724 17:24:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.724 17:24:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:52.724 17:24:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:52.724 17:24:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:52.724 17:24:22 -- host/fio.sh@12 -- # nvmftestinit 00:20:52.724 17:24:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:52.724 17:24:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.724 17:24:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:52.724 17:24:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:52.724 17:24:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:52.724 17:24:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.724 17:24:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.724 17:24:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.724 17:24:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:52.724 17:24:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:52.724 17:24:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:52.724 17:24:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:52.724 17:24:22 -- nvmf/common.sh@420 
-- # [[ tcp == tcp ]] 00:20:52.724 17:24:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:52.724 17:24:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.724 17:24:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.724 17:24:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:52.724 17:24:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:52.724 17:24:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:52.724 17:24:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:52.724 17:24:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:52.724 17:24:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.724 17:24:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:52.724 17:24:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:52.724 17:24:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:52.724 17:24:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:52.724 17:24:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:52.724 17:24:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:52.724 Cannot find device "nvmf_tgt_br" 00:20:52.724 17:24:22 -- nvmf/common.sh@155 -- # true 00:20:52.724 17:24:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:52.724 Cannot find device "nvmf_tgt_br2" 00:20:52.724 17:24:22 -- nvmf/common.sh@156 -- # true 00:20:52.724 17:24:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:52.724 17:24:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:52.724 Cannot find device "nvmf_tgt_br" 00:20:52.724 17:24:22 -- nvmf/common.sh@158 -- # true 00:20:52.724 17:24:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:52.983 Cannot find device "nvmf_tgt_br2" 00:20:52.983 17:24:22 -- nvmf/common.sh@159 -- # true 00:20:52.983 17:24:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:52.983 17:24:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:52.983 17:24:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:52.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:52.983 17:24:22 -- nvmf/common.sh@162 -- # true 00:20:52.983 17:24:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:52.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:52.983 17:24:22 -- nvmf/common.sh@163 -- # true 00:20:52.983 17:24:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:52.983 17:24:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:52.983 17:24:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:52.983 17:24:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:52.983 17:24:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:52.983 17:24:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:52.983 17:24:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:52.983 17:24:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:52.983 17:24:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:20:52.983 17:24:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:52.983 17:24:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:52.983 17:24:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:52.983 17:24:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:52.983 17:24:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:52.983 17:24:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:52.983 17:24:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:52.983 17:24:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:52.983 17:24:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:52.983 17:24:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:52.983 17:24:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:52.983 17:24:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:52.983 17:24:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:52.983 17:24:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:52.983 17:24:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:53.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:20:53.260 00:20:53.260 --- 10.0.0.2 ping statistics --- 00:20:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.260 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:53.260 17:24:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:53.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:53.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:20:53.260 00:20:53.260 --- 10.0.0.3 ping statistics --- 00:20:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.260 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:53.260 17:24:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:53.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:53.260 00:20:53.260 --- 10.0.0.1 ping statistics --- 00:20:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.260 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:53.260 17:24:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.260 17:24:22 -- nvmf/common.sh@422 -- # return 0 00:20:53.260 17:24:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:53.260 17:24:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.260 17:24:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:53.260 17:24:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:53.260 17:24:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.260 17:24:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:53.260 17:24:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:53.260 17:24:23 -- host/fio.sh@14 -- # [[ y != y ]] 00:20:53.260 17:24:23 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:20:53.260 17:24:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:53.260 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.260 17:24:23 -- host/fio.sh@22 -- # nvmfpid=88020 00:20:53.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.260 17:24:23 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:53.260 17:24:23 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.260 17:24:23 -- host/fio.sh@26 -- # waitforlisten 88020 00:20:53.260 17:24:23 -- common/autotest_common.sh@817 -- # '[' -z 88020 ']' 00:20:53.260 17:24:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.260 17:24:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:53.260 17:24:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.260 17:24:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:53.260 17:24:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.261 [2024-04-25 17:24:23.067267] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:20:53.261 [2024-04-25 17:24:23.067348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.261 [2024-04-25 17:24:23.207865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.555 [2024-04-25 17:24:23.278512] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.555 [2024-04-25 17:24:23.278754] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.555 [2024-04-25 17:24:23.278923] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.555 [2024-04-25 17:24:23.279095] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.555 [2024-04-25 17:24:23.279139] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
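The 10.0.0.x addresses used by this fio host test come from the veth/namespace topology nvmf_veth_init just built; stripped of the error-tolerant cleanup, the link-up commands, and the second target interface, it amounts to roughly:

  # the target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # a bridge ties the host-side veth ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # NVMe/TCP traffic arriving on the initiator interface for port 4420 is allowed in
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings above confirm the initiator (10.0.0.1), target (10.0.0.2) and second target address (10.0.0.3) are all reachable before nvmf_tgt is started inside the namespace.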
00:20:53.555 [2024-04-25 17:24:23.279411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.555 [2024-04-25 17:24:23.279547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.555 [2024-04-25 17:24:23.280151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.555 [2024-04-25 17:24:23.280188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.121 17:24:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:54.121 17:24:24 -- common/autotest_common.sh@850 -- # return 0 00:20:54.121 17:24:24 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.121 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.121 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.121 [2024-04-25 17:24:24.085300] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.379 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.380 17:24:24 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:54.380 17:24:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:54.380 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 17:24:24 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:54.380 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.380 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 Malloc1 00:20:54.380 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.380 17:24:24 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:54.380 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.380 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.380 17:24:24 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:54.380 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.380 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.380 17:24:24 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.380 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.380 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 [2024-04-25 17:24:24.184684] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.380 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.380 17:24:24 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:54.380 17:24:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:54.380 17:24:24 -- common/autotest_common.sh@10 -- # set +x 00:20:54.380 17:24:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:54.380 17:24:24 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:54.380 17:24:24 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:54.380 17:24:24 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
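The fio_nvme helper invoked here boils down to running stock fio with SPDK's NVMe ioengine preloaded via LD_PRELOAD (prepending an ASan runtime first only when the plugin was built with sanitizers; the ldd checks below find none), so the effective command for this first job is:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The workload parameters reported in the fio output below (randrw, 4096-byte blocks, ioengine=spdk, iodepth=128) come from example_config.fio plus the --bs=4096 override; the job file itself is not reproduced in this log.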
00:20:54.380 17:24:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:54.380 17:24:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.380 17:24:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:54.380 17:24:24 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.380 17:24:24 -- common/autotest_common.sh@1327 -- # shift 00:20:54.380 17:24:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:54.380 17:24:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:54.380 17:24:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:54.380 17:24:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:54.380 17:24:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:54.380 17:24:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:54.380 17:24:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:54.380 17:24:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:54.638 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:54.638 fio-3.35 00:20:54.638 Starting 1 thread 00:20:57.169 00:20:57.169 test: (groupid=0, jobs=1): err= 0: pid=88099: Thu Apr 25 17:24:26 2024 00:20:57.169 read: IOPS=9394, BW=36.7MiB/s (38.5MB/s)(73.6MiB/2006msec) 00:20:57.169 slat (nsec): min=1998, max=325487, avg=2609.83, stdev=3385.66 00:20:57.169 clat (usec): min=3222, max=12563, avg=7096.15, stdev=561.62 00:20:57.169 lat (usec): min=3255, max=12566, avg=7098.76, stdev=561.60 00:20:57.169 clat percentiles (usec): 00:20:57.169 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:20:57.169 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:20:57.169 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8029], 00:20:57.169 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[10552], 99.95th=[11338], 00:20:57.169 | 99.99th=[12387] 00:20:57.169 bw ( KiB/s): min=36184, max=38496, per=99.94%, avg=37556.00, stdev=989.30, samples=4 00:20:57.169 iops : min= 9046, max= 9624, avg=9389.00, stdev=247.32, samples=4 00:20:57.169 write: IOPS=9393, BW=36.7MiB/s (38.5MB/s)(73.6MiB/2006msec); 0 zone resets 00:20:57.169 slat (usec): min=2, max=240, avg= 2.71, stdev= 2.37 00:20:57.169 clat (usec): min=2453, max=12463, avg=6465.61, stdev=521.65 00:20:57.169 lat (usec): min=2466, max=12465, avg=6468.32, stdev=521.70 00:20:57.169 clat percentiles (usec): 00:20:57.169 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6063], 00:20:57.169 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:20:57.169 | 
70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7111], 95.00th=[ 7308], 00:20:57.169 | 99.00th=[ 7963], 99.50th=[ 8291], 99.90th=[10421], 99.95th=[11076], 00:20:57.169 | 99.99th=[12387] 00:20:57.169 bw ( KiB/s): min=37088, max=37888, per=99.99%, avg=37570.00, stdev=348.25, samples=4 00:20:57.169 iops : min= 9272, max= 9472, avg=9392.50, stdev=87.06, samples=4 00:20:57.169 lat (msec) : 4=0.08%, 10=99.80%, 20=0.12% 00:20:57.169 cpu : usr=67.93%, sys=22.89%, ctx=7, majf=0, minf=5 00:20:57.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:57.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:57.169 issued rwts: total=18846,18844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:57.169 00:20:57.169 Run status group 0 (all jobs): 00:20:57.169 READ: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=73.6MiB (77.2MB), run=2006-2006msec 00:20:57.169 WRITE: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=73.6MiB (77.2MB), run=2006-2006msec 00:20:57.169 17:24:26 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:57.169 17:24:26 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:57.169 17:24:26 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:57.169 17:24:26 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:57.169 17:24:26 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:57.169 17:24:26 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.169 17:24:26 -- common/autotest_common.sh@1327 -- # shift 00:20:57.169 17:24:26 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:57.169 17:24:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.169 17:24:26 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.169 17:24:26 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:57.169 17:24:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:57.169 17:24:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:57.169 17:24:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:57.169 17:24:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.169 17:24:26 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.170 17:24:26 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:57.170 17:24:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:57.170 17:24:26 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:57.170 17:24:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:57.170 17:24:26 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:57.170 17:24:26 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:57.170 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:57.170 fio-3.35 00:20:57.170 Starting 1 thread 00:20:59.700 00:20:59.700 test: (groupid=0, jobs=1): err= 0: pid=88142: Thu Apr 25 17:24:29 2024 00:20:59.700 read: IOPS=8635, BW=135MiB/s (141MB/s)(270MiB/2001msec) 00:20:59.700 slat (usec): min=2, max=121, avg= 3.55, stdev= 2.17 00:20:59.700 clat (usec): min=2442, max=16489, avg=8693.16, stdev=2280.15 00:20:59.700 lat (usec): min=2445, max=16493, avg=8696.71, stdev=2280.27 00:20:59.700 clat percentiles (usec): 00:20:59.700 | 1.00th=[ 4490], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6587], 00:20:59.700 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8586], 60.00th=[ 9241], 00:20:59.700 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11863], 95.00th=[12911], 00:20:59.700 | 99.00th=[14091], 99.50th=[14484], 99.90th=[15926], 99.95th=[16188], 00:20:59.700 | 99.99th=[16450] 00:20:59.700 bw ( KiB/s): min=67008, max=69216, per=49.38%, avg=68234.67, stdev=1124.26, samples=3 00:20:59.700 iops : min= 4188, max= 4326, avg=4264.67, stdev=70.27, samples=3 00:20:59.700 write: IOPS=5078, BW=79.4MiB/s (83.2MB/s)(144MiB/1819msec); 0 zone resets 00:20:59.700 slat (usec): min=31, max=349, avg=36.64, stdev= 9.58 00:20:59.700 clat (usec): min=5959, max=19452, avg=10797.79, stdev=1952.49 00:20:59.700 lat (usec): min=6009, max=19511, avg=10834.43, stdev=1954.10 00:20:59.700 clat percentiles (usec): 00:20:59.700 | 1.00th=[ 6980], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9110], 00:20:59.700 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[11207], 00:20:59.700 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13304], 95.00th=[14091], 00:20:59.700 | 99.00th=[16188], 99.50th=[17695], 99.90th=[19006], 99.95th=[19268], 00:20:59.700 | 99.99th=[19530] 00:20:59.700 bw ( KiB/s): min=69792, max=72896, per=87.62%, avg=71200.00, stdev=1571.91, samples=3 00:20:59.700 iops : min= 4362, max= 4556, avg=4450.00, stdev=98.24, samples=3 00:20:59.700 lat (msec) : 4=0.15%, 10=59.46%, 20=40.38% 00:20:59.700 cpu : usr=74.60%, sys=16.75%, ctx=21, majf=0, minf=16 00:20:59.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:59.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.700 issued rwts: total=17280,9238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.700 00:20:59.700 Run status group 0 (all jobs): 00:20:59.700 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=270MiB (283MB), run=2001-2001msec 00:20:59.700 WRITE: bw=79.4MiB/s (83.2MB/s), 79.4MiB/s-79.4MiB/s (83.2MB/s-83.2MB/s), io=144MiB (151MB), run=1819-1819msec 00:20:59.700 17:24:29 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:20:59.700 17:24:29 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:20:59.700 17:24:29 -- host/fio.sh@49 -- # get_nvme_bdfs 00:20:59.700 17:24:29 -- common/autotest_common.sh@1499 -- # bdfs=() 00:20:59.700 17:24:29 -- common/autotest_common.sh@1499 -- # local bdfs 00:20:59.700 17:24:29 -- common/autotest_common.sh@1500 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:59.700 17:24:29 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:20:59.700 17:24:29 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:59.700 17:24:29 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:20:59.700 17:24:29 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:59.700 17:24:29 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 Nvme0n1 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- host/fio.sh@51 -- # ls_guid=1d4ce733-82c3-46ec-ab65-99977af0378a 00:20:59.700 17:24:29 -- host/fio.sh@52 -- # get_lvs_free_mb 1d4ce733-82c3-46ec-ab65-99977af0378a 00:20:59.700 17:24:29 -- common/autotest_common.sh@1350 -- # local lvs_uuid=1d4ce733-82c3-46ec-ab65-99977af0378a 00:20:59.700 17:24:29 -- common/autotest_common.sh@1351 -- # local lvs_info 00:20:59.700 17:24:29 -- common/autotest_common.sh@1352 -- # local fc 00:20:59.700 17:24:29 -- common/autotest_common.sh@1353 -- # local cs 00:20:59.700 17:24:29 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:20:59.700 { 00:20:59.700 "base_bdev": "Nvme0n1", 00:20:59.700 "block_size": 4096, 00:20:59.700 "cluster_size": 1073741824, 00:20:59.700 "free_clusters": 4, 00:20:59.700 "name": "lvs_0", 00:20:59.700 "total_data_clusters": 4, 00:20:59.700 "uuid": "1d4ce733-82c3-46ec-ab65-99977af0378a" 00:20:59.700 } 00:20:59.700 ]' 00:20:59.700 17:24:29 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="1d4ce733-82c3-46ec-ab65-99977af0378a") .free_clusters' 00:20:59.700 17:24:29 -- common/autotest_common.sh@1355 -- # fc=4 00:20:59.700 17:24:29 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="1d4ce733-82c3-46ec-ab65-99977af0378a") .cluster_size' 00:20:59.700 4096 00:20:59.700 17:24:29 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:20:59.700 17:24:29 -- common/autotest_common.sh@1359 -- # free_mb=4096 00:20:59.700 17:24:29 -- common/autotest_common.sh@1360 -- # echo 4096 00:20:59.700 17:24:29 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 098f5614-334b-41ef-aebb-494383d63986 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set 
+x 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:59.700 17:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.700 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:20:59.700 17:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.700 17:24:29 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.701 17:24:29 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.701 17:24:29 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:59.701 17:24:29 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:59.701 17:24:29 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:59.701 17:24:29 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.701 17:24:29 -- common/autotest_common.sh@1327 -- # shift 00:20:59.701 17:24:29 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:59.701 17:24:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:59.701 17:24:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:59.701 17:24:29 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:59.701 17:24:29 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:59.701 17:24:29 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:59.701 17:24:29 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:59.701 17:24:29 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:59.701 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:59.701 fio-3.35 00:20:59.701 Starting 1 thread 00:21:02.232 00:21:02.232 test: (groupid=0, jobs=1): err= 0: pid=88227: Thu Apr 25 17:24:31 2024 00:21:02.232 read: IOPS=6155, BW=24.0MiB/s (25.2MB/s)(48.3MiB/2009msec) 00:21:02.232 slat (usec): min=2, max=323, avg= 2.93, stdev= 3.79 00:21:02.232 clat (usec): min=4223, 
max=18314, avg=10859.73, stdev=1009.72 00:21:02.232 lat (usec): min=4233, max=18316, avg=10862.66, stdev=1009.56 00:21:02.232 clat percentiles (usec): 00:21:02.232 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:21:02.232 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:21:02.232 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:21:02.232 | 99.00th=[14484], 99.50th=[15401], 99.90th=[17171], 99.95th=[17433], 00:21:02.232 | 99.99th=[18220] 00:21:02.232 bw ( KiB/s): min=23128, max=25288, per=99.91%, avg=24602.00, stdev=992.91, samples=4 00:21:02.232 iops : min= 5782, max= 6322, avg=6150.50, stdev=248.23, samples=4 00:21:02.232 write: IOPS=6139, BW=24.0MiB/s (25.1MB/s)(48.2MiB/2009msec); 0 zone resets 00:21:02.232 slat (usec): min=2, max=127, avg= 3.03, stdev= 2.45 00:21:02.232 clat (usec): min=1990, max=17839, avg=9819.02, stdev=956.22 00:21:02.232 lat (usec): min=2003, max=17841, avg=9822.05, stdev=956.13 00:21:02.232 clat percentiles (usec): 00:21:02.232 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:21:02.232 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:21:02.232 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:21:02.232 | 99.00th=[13173], 99.50th=[14091], 99.90th=[16188], 99.95th=[17171], 00:21:02.232 | 99.99th=[17695] 00:21:02.232 bw ( KiB/s): min=24008, max=24784, per=99.95%, avg=24546.00, stdev=361.12, samples=4 00:21:02.232 iops : min= 6002, max= 6196, avg=6136.50, stdev=90.28, samples=4 00:21:02.232 lat (msec) : 2=0.01%, 4=0.03%, 10=38.56%, 20=61.40% 00:21:02.232 cpu : usr=70.62%, sys=22.51%, ctx=5, majf=0, minf=23 00:21:02.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:02.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.232 issued rwts: total=12367,12334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.232 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.232 00:21:02.232 Run status group 0 (all jobs): 00:21:02.232 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.7MB), run=2009-2009msec 00:21:02.232 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=48.2MiB (50.5MB), run=2009-2009msec 00:21:02.232 17:24:31 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:02.232 17:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.232 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 17:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.232 17:24:31 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:02.232 17:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.232 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 17:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.232 17:24:31 -- host/fio.sh@62 -- # ls_nested_guid=2ec5d70f-aff1-4082-90ad-36aecccbf657 00:21:02.232 17:24:31 -- host/fio.sh@63 -- # get_lvs_free_mb 2ec5d70f-aff1-4082-90ad-36aecccbf657 00:21:02.232 17:24:31 -- common/autotest_common.sh@1350 -- # local lvs_uuid=2ec5d70f-aff1-4082-90ad-36aecccbf657 00:21:02.232 17:24:31 -- common/autotest_common.sh@1351 -- # local lvs_info 00:21:02.232 17:24:31 -- common/autotest_common.sh@1352 -- # local fc 00:21:02.232 17:24:31 -- 
common/autotest_common.sh@1353 -- # local cs 00:21:02.232 17:24:31 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:02.232 17:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.232 17:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 17:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.232 17:24:31 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:21:02.232 { 00:21:02.232 "base_bdev": "Nvme0n1", 00:21:02.232 "block_size": 4096, 00:21:02.232 "cluster_size": 1073741824, 00:21:02.232 "free_clusters": 0, 00:21:02.232 "name": "lvs_0", 00:21:02.232 "total_data_clusters": 4, 00:21:02.232 "uuid": "1d4ce733-82c3-46ec-ab65-99977af0378a" 00:21:02.232 }, 00:21:02.232 { 00:21:02.232 "base_bdev": "098f5614-334b-41ef-aebb-494383d63986", 00:21:02.232 "block_size": 4096, 00:21:02.232 "cluster_size": 4194304, 00:21:02.232 "free_clusters": 1022, 00:21:02.232 "name": "lvs_n_0", 00:21:02.232 "total_data_clusters": 1022, 00:21:02.232 "uuid": "2ec5d70f-aff1-4082-90ad-36aecccbf657" 00:21:02.232 } 00:21:02.232 ]' 00:21:02.232 17:24:31 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="2ec5d70f-aff1-4082-90ad-36aecccbf657") .free_clusters' 00:21:02.232 17:24:31 -- common/autotest_common.sh@1355 -- # fc=1022 00:21:02.232 17:24:31 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="2ec5d70f-aff1-4082-90ad-36aecccbf657") .cluster_size' 00:21:02.232 4088 00:21:02.232 17:24:32 -- common/autotest_common.sh@1356 -- # cs=4194304 00:21:02.232 17:24:32 -- common/autotest_common.sh@1359 -- # free_mb=4088 00:21:02.232 17:24:32 -- common/autotest_common.sh@1360 -- # echo 4088 00:21:02.232 17:24:32 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:02.232 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.232 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 9365c79c-bb86-4fd3-9013-d51370b9a926 00:21:02.232 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.232 17:24:32 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:02.232 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.232 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.232 17:24:32 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:02.232 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.232 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.232 17:24:32 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:02.232 17:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.232 17:24:32 -- common/autotest_common.sh@10 -- # set +x 00:21:02.232 17:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.232 17:24:32 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.232 17:24:32 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.232 17:24:32 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:02.232 17:24:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:02.232 17:24:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:02.232 17:24:32 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.232 17:24:32 -- common/autotest_common.sh@1327 -- # shift 00:21:02.232 17:24:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:02.232 17:24:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:02.232 17:24:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:02.232 17:24:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:02.232 17:24:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:02.232 17:24:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:02.232 17:24:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:02.232 17:24:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:02.491 fio-3.35 00:21:02.491 Starting 1 thread 00:21:05.020 00:21:05.020 test: (groupid=0, jobs=1): err= 0: pid=88282: Thu Apr 25 17:24:34 2024 00:21:05.020 read: IOPS=5494, BW=21.5MiB/s (22.5MB/s)(44.0MiB/2052msec) 00:21:05.020 slat (nsec): min=1995, max=329201, avg=2924.24, stdev=4485.61 00:21:05.020 clat (usec): min=4584, max=63103, avg=12269.80, stdev=3712.94 00:21:05.020 lat (usec): min=4593, max=63106, avg=12272.72, stdev=3712.82 00:21:05.020 clat percentiles (usec): 00:21:05.020 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:21:05.020 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:21:05.020 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13566], 00:21:05.020 | 99.00th=[14615], 99.50th=[52691], 99.90th=[60031], 99.95th=[62129], 00:21:05.020 | 99.99th=[63177] 00:21:05.020 bw ( KiB/s): min=21504, max=22824, per=100.00%, avg=22404.00, stdev=610.61, samples=4 00:21:05.020 iops : min= 5376, max= 5706, avg=5601.00, stdev=152.65, samples=4 00:21:05.020 write: IOPS=5457, BW=21.3MiB/s (22.4MB/s)(43.7MiB/2052msec); 0 zone resets 00:21:05.020 slat (usec): min=2, max=269, avg= 3.02, stdev= 3.45 00:21:05.020 clat (usec): min=2494, max=63170, avg=10989.31, stdev=3274.38 00:21:05.020 lat (usec): min=2508, max=63173, avg=10992.33, stdev=3274.30 00:21:05.020 clat percentiles (usec): 00:21:05.020 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:21:05.020 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:21:05.020 | 70.00th=[11207], 
80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:21:05.020 | 99.00th=[12911], 99.50th=[13566], 99.90th=[58459], 99.95th=[60556], 00:21:05.020 | 99.99th=[61604] 00:21:05.020 bw ( KiB/s): min=21992, max=22552, per=100.00%, avg=22288.00, stdev=267.25, samples=4 00:21:05.020 iops : min= 5498, max= 5638, avg=5572.00, stdev=66.81, samples=4 00:21:05.020 lat (msec) : 4=0.04%, 10=9.24%, 20=90.16%, 100=0.57% 00:21:05.020 cpu : usr=69.97%, sys=23.31%, ctx=42, majf=0, minf=23 00:21:05.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:05.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.020 issued rwts: total=11274,11199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.020 00:21:05.020 Run status group 0 (all jobs): 00:21:05.020 READ: bw=21.5MiB/s (22.5MB/s), 21.5MiB/s-21.5MiB/s (22.5MB/s-22.5MB/s), io=44.0MiB (46.2MB), run=2052-2052msec 00:21:05.020 WRITE: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s), io=43.7MiB (45.9MB), run=2052-2052msec 00:21:05.020 17:24:34 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:05.020 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.020 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:05.020 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.020 17:24:34 -- host/fio.sh@72 -- # sync 00:21:05.020 17:24:34 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:05.020 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.020 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:05.020 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.020 17:24:34 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:21:05.020 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.020 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:05.020 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.020 17:24:34 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:21:05.021 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.021 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:05.021 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.021 17:24:34 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:21:05.021 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.021 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:05.021 17:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.021 17:24:34 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:21:05.021 17:24:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.021 17:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:05.588 17:24:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.588 17:24:35 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:05.588 17:24:35 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:05.588 17:24:35 -- host/fio.sh@84 -- # nvmftestfini 00:21:05.588 17:24:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:05.588 17:24:35 -- nvmf/common.sh@117 -- # sync 00:21:05.588 17:24:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.588 17:24:35 -- nvmf/common.sh@120 -- # set +e 00:21:05.588 17:24:35 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:21:05.588 17:24:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.588 rmmod nvme_tcp 00:21:05.588 rmmod nvme_fabrics 00:21:05.588 rmmod nvme_keyring 00:21:05.588 17:24:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.588 17:24:35 -- nvmf/common.sh@124 -- # set -e 00:21:05.588 17:24:35 -- nvmf/common.sh@125 -- # return 0 00:21:05.588 17:24:35 -- nvmf/common.sh@478 -- # '[' -n 88020 ']' 00:21:05.588 17:24:35 -- nvmf/common.sh@479 -- # killprocess 88020 00:21:05.588 17:24:35 -- common/autotest_common.sh@936 -- # '[' -z 88020 ']' 00:21:05.588 17:24:35 -- common/autotest_common.sh@940 -- # kill -0 88020 00:21:05.588 17:24:35 -- common/autotest_common.sh@941 -- # uname 00:21:05.588 17:24:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.588 17:24:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88020 00:21:05.589 killing process with pid 88020 00:21:05.589 17:24:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:05.589 17:24:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:05.589 17:24:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88020' 00:21:05.589 17:24:35 -- common/autotest_common.sh@955 -- # kill 88020 00:21:05.589 17:24:35 -- common/autotest_common.sh@960 -- # wait 88020 00:21:05.589 17:24:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:05.589 17:24:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:05.589 17:24:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:05.589 17:24:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.589 17:24:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.589 17:24:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.589 17:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.589 17:24:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.848 17:24:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:05.848 00:21:05.848 real 0m13.073s 00:21:05.848 user 0m53.559s 00:21:05.848 sys 0m3.445s 00:21:05.848 17:24:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:05.848 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:21:05.848 ************************************ 00:21:05.848 END TEST nvmf_fio_host 00:21:05.848 ************************************ 00:21:05.848 17:24:35 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:05.848 17:24:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.848 17:24:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.848 17:24:35 -- common/autotest_common.sh@10 -- # set +x 00:21:05.848 ************************************ 00:21:05.848 START TEST nvmf_failover 00:21:05.848 ************************************ 00:21:05.848 17:24:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:05.848 * Looking for test storage... 
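Before the failover suite output continues, it helps to condense what the fio_host run above actually exercised. The following is a minimal sketch, not the test script itself; it assumes a running nvmf_tgt reachable on 10.0.0.2 and the CI checkout under /home/vagrant/spdk_repo/spdk (commands shortened to paths relative to the repo root).

# Attach the local PCIe NVMe device and carve an lvol out of it
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0   # 1 GiB clusters
scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096                   # 4096 MiB volume

# Export the lvol over NVMe/TCP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

# Drive it with fio through the SPDK NVMe ioengine (plugin preloaded via LD_PRELOAD)
LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

# Teardown: drop the subsystem, lvols, lvstore, and detach the controller
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
scripts/rpc.py bdev_nvme_detach_controller Nvme0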
00:21:05.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:05.848 17:24:35 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:05.848 17:24:35 -- nvmf/common.sh@7 -- # uname -s 00:21:05.848 17:24:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.848 17:24:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.848 17:24:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.848 17:24:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.848 17:24:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.848 17:24:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.848 17:24:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.848 17:24:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.848 17:24:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.848 17:24:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.848 17:24:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:05.848 17:24:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:05.848 17:24:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.848 17:24:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.848 17:24:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:05.848 17:24:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.848 17:24:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:05.848 17:24:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.848 17:24:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.848 17:24:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.848 17:24:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.848 17:24:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.848 17:24:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.848 17:24:35 -- paths/export.sh@5 -- # export PATH 00:21:05.848 17:24:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.848 17:24:35 -- nvmf/common.sh@47 -- # : 0 00:21:05.848 17:24:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.848 17:24:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.848 17:24:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.848 17:24:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.848 17:24:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.848 17:24:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.848 17:24:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.848 17:24:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.848 17:24:35 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:05.848 17:24:35 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:05.848 17:24:35 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:05.848 17:24:35 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.848 17:24:35 -- host/failover.sh@18 -- # nvmftestinit 00:21:05.848 17:24:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:05.848 17:24:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.848 17:24:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:05.848 17:24:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:05.848 17:24:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:05.848 17:24:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.848 17:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.848 17:24:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.848 17:24:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:05.848 17:24:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:05.848 17:24:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:05.848 17:24:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:05.848 17:24:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:05.848 17:24:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:05.848 17:24:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.848 17:24:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.848 17:24:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:05.848 17:24:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:05.848 17:24:35 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:05.848 17:24:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:05.848 17:24:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:05.848 17:24:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.848 17:24:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:05.848 17:24:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:05.849 17:24:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:05.849 17:24:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:05.849 17:24:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:05.849 17:24:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:06.108 Cannot find device "nvmf_tgt_br" 00:21:06.108 17:24:35 -- nvmf/common.sh@155 -- # true 00:21:06.108 17:24:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.108 Cannot find device "nvmf_tgt_br2" 00:21:06.108 17:24:35 -- nvmf/common.sh@156 -- # true 00:21:06.108 17:24:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:06.108 17:24:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:06.108 Cannot find device "nvmf_tgt_br" 00:21:06.108 17:24:35 -- nvmf/common.sh@158 -- # true 00:21:06.108 17:24:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:06.108 Cannot find device "nvmf_tgt_br2" 00:21:06.108 17:24:35 -- nvmf/common.sh@159 -- # true 00:21:06.108 17:24:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:06.108 17:24:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:06.108 17:24:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.108 17:24:35 -- nvmf/common.sh@162 -- # true 00:21:06.108 17:24:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:06.108 17:24:35 -- nvmf/common.sh@163 -- # true 00:21:06.108 17:24:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:06.108 17:24:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:06.108 17:24:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:06.108 17:24:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:06.108 17:24:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:06.108 17:24:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:06.108 17:24:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:06.108 17:24:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:06.108 17:24:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:06.108 17:24:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:06.108 17:24:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:06.108 17:24:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:06.108 17:24:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:06.108 17:24:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:21:06.108 17:24:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:06.367 17:24:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:06.367 17:24:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:06.367 17:24:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:06.367 17:24:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:06.367 17:24:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:06.367 17:24:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:06.367 17:24:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:06.367 17:24:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:06.367 17:24:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:06.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:21:06.367 00:21:06.367 --- 10.0.0.2 ping statistics --- 00:21:06.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.367 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:06.367 17:24:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:06.367 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:06.367 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:06.367 00:21:06.367 --- 10.0.0.3 ping statistics --- 00:21:06.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.367 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:06.367 17:24:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:06.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:06.367 00:21:06.367 --- 10.0.0.1 ping statistics --- 00:21:06.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.367 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:06.367 17:24:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.367 17:24:36 -- nvmf/common.sh@422 -- # return 0 00:21:06.367 17:24:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:06.367 17:24:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.367 17:24:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:06.367 17:24:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:06.367 17:24:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.367 17:24:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:06.367 17:24:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:06.367 17:24:36 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:06.367 17:24:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:06.367 17:24:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:06.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:21:06.367 17:24:36 -- nvmf/common.sh@470 -- # nvmfpid=88501 00:21:06.367 17:24:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:06.367 17:24:36 -- nvmf/common.sh@471 -- # waitforlisten 88501 00:21:06.367 17:24:36 -- common/autotest_common.sh@817 -- # '[' -z 88501 ']' 00:21:06.367 17:24:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.367 17:24:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.367 17:24:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.367 17:24:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:21:06.367 [2024-04-25 17:24:36.248467] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:06.367 [2024-04-25 17:24:36.248550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.626 [2024-04-25 17:24:36.387373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:06.626 [2024-04-25 17:24:36.455981] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.626 [2024-04-25 17:24:36.456048] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.626 [2024-04-25 17:24:36.456062] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.626 [2024-04-25 17:24:36.456072] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.626 [2024-04-25 17:24:36.456082] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
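The nvmf_veth_init sequence above is easier to follow when collapsed into one block. This is only a sketch of the topology it builds, assuming the same interface and namespace names used by test/nvmf/common.sh; it is not a substitute for that script.

# Target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator <-> bridge, target <-> bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.2 (a second pair carries 10.0.0.3)
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and join the bridge-side ends into nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP traffic through and sanity-check reachability in both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1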
00:21:06.626 [2024-04-25 17:24:36.456473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.626 [2024-04-25 17:24:36.456814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.626 [2024-04-25 17:24:36.456822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.192 17:24:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:07.192 17:24:37 -- common/autotest_common.sh@850 -- # return 0 00:21:07.192 17:24:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:07.192 17:24:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.192 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:21:07.450 17:24:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.450 17:24:37 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:07.450 [2024-04-25 17:24:37.362060] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.450 17:24:37 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:07.709 Malloc0 00:21:07.967 17:24:37 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.225 17:24:37 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:08.484 17:24:38 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.484 [2024-04-25 17:24:38.410597] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.484 17:24:38 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:08.742 [2024-04-25 17:24:38.626729] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:08.742 17:24:38 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:09.001 [2024-04-25 17:24:38.826908] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:09.001 17:24:38 -- host/failover.sh@31 -- # bdevperf_pid=88613 00:21:09.001 17:24:38 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:09.001 17:24:38 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.001 17:24:38 -- host/failover.sh@34 -- # waitforlisten 88613 /var/tmp/bdevperf.sock 00:21:09.001 17:24:38 -- common/autotest_common.sh@817 -- # '[' -z 88613 ']' 00:21:09.001 17:24:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.001 17:24:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.001 17:24:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
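failover.sh then assembles the target side and the bdevperf initiator it will fail over. Condensed from the RPC calls visible in the log, and assuming the same 10.0.0.2 listeners on ports 4420-4422, the setup amounts to roughly:

# Target side: TCP transport, a 64 MiB / 512 B-block malloc namespace, three listeners
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf with queue depth 128, 4 KiB I/Os, verify workload for 15 s
# (-z makes it wait for an RPC before starting I/O), then attach via the first path
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the workload; the test then removes and re-adds listeners (e.g. port 4420)
# while I/O runs, forcing the controller to fail over between the three paths
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &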
00:21:09.001 17:24:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.001 17:24:38 -- common/autotest_common.sh@10 -- # set +x 00:21:09.259 17:24:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:09.259 17:24:39 -- common/autotest_common.sh@850 -- # return 0 00:21:09.259 17:24:39 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:09.517 NVMe0n1 00:21:09.517 17:24:39 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:09.775 00:21:09.775 17:24:39 -- host/failover.sh@39 -- # run_test_pid=88648 00:21:09.775 17:24:39 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.775 17:24:39 -- host/failover.sh@41 -- # sleep 1 00:21:11.149 17:24:40 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.149 [2024-04-25 17:24:40.973212] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973281] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973326] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973370] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973377] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973399] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973406] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973435] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 
00:21:11.149 [2024-04-25 17:24:40.973541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973548] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973654] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973682] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is same with the state(5) to be set 00:21:11.149 [2024-04-25 17:24:40.973689] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa26720 is 
same with the state(5) to be set 00:21:11.149 17:24:40 -- host/failover.sh@45 -- # sleep 3 00:21:14.433 17:24:43 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:14.433 00:21:14.433 17:24:44 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:14.693 [2024-04-25 17:24:44.545584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545641] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545659] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545681] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545710] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545748] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa272c0 is same with the state(5) to be set 00:21:14.693 [2024-04-25 17:24:44.545805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa272c0 is same with the state(5) to be set
[the preceding tcp.c:1587:nvmf_tcp_qpair_set_recv_state *ERROR* entry for tqpair=0xa272c0 repeats with only the microsecond timestamp advancing, from 17:24:44.545812 through 17:24:44.546660]
00:21:14.694 17:24:44 -- host/failover.sh@50 -- # sleep 3
00:21:17.975 17:24:47 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:17.975 [2024-04-25 17:24:47.798660] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:17.975 17:24:47 -- host/failover.sh@55 -- # sleep 1
00:21:18.930 17:24:48 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:19.216 [2024-04-25 17:24:49.045762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87f7b0 is same with the state(5) to be set
[the same entry for tqpair=0x87f7b0 repeats six more times, 17:24:49.045826 through 17:24:49.045871]
00:21:19.216 17:24:49 -- host/failover.sh@59 -- # wait 88648
00:21:25.787 0
00:21:25.787 17:24:54 -- host/failover.sh@61 -- # killprocess 88613
00:21:25.787 17:24:54 -- common/autotest_common.sh@936 -- # '[' -z 88613 ']'
00:21:25.787 17:24:54 -- common/autotest_common.sh@940 -- # kill -0 88613
00:21:25.787 17:24:54 -- common/autotest_common.sh@941 -- # uname
00:21:25.787 17:24:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:25.787 17:24:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88613
00:21:25.787 killing process with pid 88613
17:24:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:25.787 17:24:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:25.787 17:24:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88613'
00:21:25.787 17:24:54 -- common/autotest_common.sh@955 -- # kill 88613
00:21:25.787 17:24:54 -- common/autotest_common.sh@960 -- # wait 88613
00:21:25.787 17:24:55 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:25.787 [2024-04-25 17:24:38.887983] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization...
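The stretch above shows the failover test in progress: the target logs a long burst of tcp.c:1587 errors for tqpair=0xa272c0, the script adds a listener on port 4420 and removes the one on port 4422, a shorter burst appears for tqpair=0x87f7b0, and the run then tears down bdevperf (pid 88613) and dumps its log from try.txt. The repeated message itself only says that a qpair's receive-state setter was asked to move the qpair into the state it is already in, state(5). The C sketch below illustrates that guard-and-log pattern; it is not the SPDK source, and every type, enum, and function name in it is an assumption.

/*
 * Hedged sketch of a receive-state setter that warns when asked to set the
 * state the qpair is already in.  Illustration only; names are invented and
 * RECV_STATE_ERROR being value 5 (the "state(5)" in the log) is an assumption.
 */
#include <stdio.h>

enum recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_AWAIT_PDU_CH,
	RECV_STATE_AWAIT_PDU_PSH,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR,               /* == 5 */
};

struct tcp_qpair {
	enum recv_state recv_state;
};

static void set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Redundant transition: log one line per call and bail out. */
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;     /* real code would also do per-state bookkeeping */
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };

	/* A qpair stuck in its error state while callers keep requesting the
	 * same transition emits one log line per call, which is how a single
	 * connection can generate a burst like the one above. */
	set_recv_state(&q, RECV_STATE_ERROR);
	set_recv_state(&q, RECV_STATE_ERROR);
	return 0;
}

In this run the bursts are followed by the listener switch and, later in try.txt, a successful controller reset, so the repetition reads as noise from one stuck connection rather than a separate failure.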
00:21:25.787 [2024-04-25 17:24:38.888208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88613 ] 00:21:25.787 [2024-04-25 17:24:39.029683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.787 [2024-04-25 17:24:39.095434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.787 Running I/O for 15 seconds... 00:21:25.787 [2024-04-25 17:24:40.973832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.787 [2024-04-25 17:24:40.973881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.787 [2024-04-25 17:24:40.973898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.787 [2024-04-25 17:24:40.973912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.787 [2024-04-25 17:24:40.973925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.787 [2024-04-25 17:24:40.973938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.788 [2024-04-25 17:24:40.973951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.788 [2024-04-25 17:24:40.973963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.788 [2024-04-25 17:24:40.973976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1126220 is same with the state(5) to be set 00:21:25.788 [2024-04-25 17:24:40.974022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.788 [2024-04-25 17:24:40.974041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.788 [2024-04-25 17:24:40.974078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.788 [2024-04-25 17:24:40.974092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.788 [2024-04-25 17:24:40.974137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.788 [2024-04-25 17:24:40.974149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.788 [2024-04-25 17:24:40.974162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.788 [2024-04-25 17:24:40.974174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.788 [2024-04-25 17:24:40.974187] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[the matching ABORTED - SQ DELETION completion and essentially identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs follow for the rest of the outstanding I/O: READ commands covering lba 90640 through 90952 and WRITE commands covering lba 90960 through 91592, each len:8, and every one completed as ABORTED - SQ DELETION (00/08) sqhd:0000]
00:21:25.791 [2024-04-25 17:24:40.977884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:25.791 [2024-04-25 17:24:40.977897]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.791 [2024-04-25 17:24:40.977918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.791 [2024-04-25 17:24:40.977931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.791 [2024-04-25 17:24:40.977959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.791 [2024-04-25 17:24:40.977973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.791 [2024-04-25 17:24:40.977983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91616 len:8 PRP1 0x0 PRP2 0x0 00:21:25.791 [2024-04-25 17:24:40.977996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.791 [2024-04-25 17:24:40.978039] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x116e940 was disconnected and freed. reset controller. 00:21:25.792 [2024-04-25 17:24:40.978062] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:25.792 [2024-04-25 17:24:40.978090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.792 [2024-04-25 17:24:40.981960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.792 [2024-04-25 17:24:40.981994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1126220 (9): Bad file descriptor 00:21:25.792 [2024-04-25 17:24:41.020465] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
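The trace above is one complete failover cycle: queued I/O is manually completed with ABORTED - SQ DELETION status, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset finishes successfully. A minimal sketch of how a secondary TCP path like the one exercised here could be registered through SPDK's JSON-RPC client is given below; the bdev name "Nvme0" is an assumption, the NQN and addresses are taken from this log, and the exact multipath flag spelling may differ between SPDK releases.

  # Sketch only; flag names (-x/--multipath) are assumptions and may vary by SPDK version.
  # Register the primary TCP path seen in this log (10.0.0.2:4420) under a bdev controller name.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Attach the alternate path (10.0.0.2:4421) to the same controller so bdev_nvme can
  # fail over to it when the first qpair is disconnected, as in the trace above.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 --multipath failover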
00:21:25.792 [2024-04-25 17:24:44.546770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.546826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.546851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.546889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.546907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.546921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.546936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.546949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.546964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.546978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.546993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 
17:24:44.547174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.792 [2024-04-25 17:24:44.547702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.792 [2024-04-25 17:24:44.547730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547774] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.547977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.547992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.793 [2024-04-25 17:24:44.548379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.793 [2024-04-25 17:24:44.548394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123024 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:25.794 [2024-04-25 17:24:44.548781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.548975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.548989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 
17:24:44.549074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.794 [2024-04-25 17:24:44.549369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.794 [2024-04-25 17:24:44.549384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.795 [2024-04-25 17:24:44.549803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.549832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.549860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.549888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.549915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.549942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.549969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.549989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.795 [2024-04-25 17:24:44.550418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.795 [2024-04-25 17:24:44.550434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:25.796 [2024-04-25 17:24:44.550579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:44.550748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1129b30 is same with the state(5) to be set 00:21:25.796 [2024-04-25 17:24:44.550787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.796 [2024-04-25 17:24:44.550799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.796 [2024-04-25 17:24:44.550812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123392 len:8 PRP1 0x0 PRP2 0x0 00:21:25.796 [2024-04-25 17:24:44.550825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550875] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1129b30 was disconnected and freed. reset controller. 
00:21:25.796 [2024-04-25 17:24:44.550894] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:25.796 [2024-04-25 17:24:44.550947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.796 [2024-04-25 17:24:44.550968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.550984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.796 [2024-04-25 17:24:44.550998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.551012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.796 [2024-04-25 17:24:44.551025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.551039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.796 [2024-04-25 17:24:44.551052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:44.551066] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.796 [2024-04-25 17:24:44.554885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.796 [2024-04-25 17:24:44.554924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1126220 (9): Bad file descriptor 00:21:25.796 [2024-04-25 17:24:44.590511] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:25.796 [2024-04-25 17:24:49.046611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.046983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.046997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.047012] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.047026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.047057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.047071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.047100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.796 [2024-04-25 17:24:49.047128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.796 [2024-04-25 17:24:49.047157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.797 [2024-04-25 17:24:49.047738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.797 [2024-04-25 17:24:49.047768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.797 [2024-04-25 17:24:49.047807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.797 [2024-04-25 17:24:49.047838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.797 [2024-04-25 17:24:49.047868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.797 [2024-04-25 17:24:49.047898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.797 [2024-04-25 17:24:49.047927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:25.797 [2024-04-25 17:24:49.047956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.047971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.047985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.048001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.048024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.048040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.048069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.048113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.048126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.048140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.048153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.048166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.797 [2024-04-25 17:24:49.048179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.797 [2024-04-25 17:24:49.048193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.798 [2024-04-25 17:24:49.048506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048648] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.048979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.048993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.049008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.798 [2024-04-25 17:24:49.049022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.798 [2024-04-25 17:24:49.049038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.799 [2024-04-25 17:24:49.049111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.799 [2024-04-25 17:24:49.049138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.799 [2024-04-25 17:24:49.049164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.799 [2024-04-25 17:24:49.049191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.799 [2024-04-25 17:24:49.049218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.799 [2024-04-25 17:24:49.049244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.799 [2024-04-25 17:24:49.049270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 
[2024-04-25 17:24:49.049320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.799 [2024-04-25 17:24:49.049954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.799 [2024-04-25 17:24:49.049968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.049983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.800 [2024-04-25 17:24:49.049997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.800 [2024-04-25 17:24:49.050027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.800 [2024-04-25 17:24:49.050062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.800 [2024-04-25 17:24:49.050092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:25.800 [2024-04-25 17:24:49.050165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91560 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91568 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050291] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91576 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91584 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91592 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91600 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91608 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91616 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91624 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91632 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91640 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91648 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90816 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90824 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 
[2024-04-25 17:24:49.050889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90832 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.800 [2024-04-25 17:24:49.050914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.800 [2024-04-25 17:24:49.050924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.800 [2024-04-25 17:24:49.050934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90840 len:8 PRP1 0x0 PRP2 0x0 00:21:25.800 [2024-04-25 17:24:49.050947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.050960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.801 [2024-04-25 17:24:49.050970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.801 [2024-04-25 17:24:49.050980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90848 len:8 PRP1 0x0 PRP2 0x0 00:21:25.801 [2024-04-25 17:24:49.050993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.801 [2024-04-25 17:24:49.051016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.801 [2024-04-25 17:24:49.051026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90856 len:8 PRP1 0x0 PRP2 0x0 00:21:25.801 [2024-04-25 17:24:49.051039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.801 [2024-04-25 17:24:49.051062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.801 [2024-04-25 17:24:49.051074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90864 len:8 PRP1 0x0 PRP2 0x0 00:21:25.801 [2024-04-25 17:24:49.051102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.801 [2024-04-25 17:24:49.051140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.801 [2024-04-25 17:24:49.051149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90872 len:8 PRP1 0x0 PRP2 0x0 00:21:25.801 [2024-04-25 17:24:49.051161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051206] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11906d0 was disconnected and freed. reset controller. 
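The flood of ABORTED - SQ DELETION completions above is the expected teardown path: once the controller reset begins, every I/O still queued on the deleted submission queue is completed with that status, the qpair (0x11906d0) is disconnected and freed, and bdev_nvme retries the work on the next configured path. For that retry to have somewhere to go, the target subsystem needs more than one TCP listener. A minimal sketch of how the extra listeners could be added over JSON-RPC, mirroring the rpc.py calls traced further down in this log (script path, subsystem NQN and the 10.0.0.2 ports are taken from this run; a different environment would substitute its own):

  #!/usr/bin/env bash
  # Sketch: give the subsystem several TCP listeners so bdev_nvme has
  # alternate paths to fail over to when the active one is torn down.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for port in 4420 4421 4422; do
      "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
  done
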
00:21:25.801 [2024-04-25 17:24:49.051223] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:25.801 [2024-04-25 17:24:49.051273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.801 [2024-04-25 17:24:49.051292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.801 [2024-04-25 17:24:49.051318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.801 [2024-04-25 17:24:49.051353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.801 [2024-04-25 17:24:49.051378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.801 [2024-04-25 17:24:49.051390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.801 [2024-04-25 17:24:49.055194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.801 [2024-04-25 17:24:49.055229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1126220 (9): Bad file descriptor 00:21:25.801 [2024-04-25 17:24:49.086267] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:25.801 00:21:25.801 Latency(us) 00:21:25.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.801 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:25.801 Verification LBA range: start 0x0 length 0x4000 00:21:25.801 NVMe0n1 : 15.01 9927.51 38.78 225.19 0.00 12579.69 498.97 50522.30 00:21:25.801 =================================================================================================================== 00:21:25.801 Total : 9927.51 38.78 225.19 0.00 12579.69 498.97 50522.30 00:21:25.801 Received shutdown signal, test time was about 15.000000 seconds 00:21:25.801 00:21:25.801 Latency(us) 00:21:25.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.801 =================================================================================================================== 00:21:25.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.801 17:24:55 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:25.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
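The grep -c 'Resetting controller successful' trace above feeds the count=3 assignment that follows: the test only proceeds if exactly three failovers across the 4420/4421/4422 listeners were logged by the first bdevperf run. A hedged sketch of that assertion, assuming the bdevperf output was captured to the try.txt file that this log cats and removes later:

  # Sketch: require all three path failovers before starting the next phase.
  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

  count=$(grep -c 'Resetting controller successful' "$log")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi
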
00:21:25.801 17:24:55 -- host/failover.sh@65 -- # count=3 00:21:25.801 17:24:55 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:25.801 17:24:55 -- host/failover.sh@73 -- # bdevperf_pid=88848 00:21:25.801 17:24:55 -- host/failover.sh@75 -- # waitforlisten 88848 /var/tmp/bdevperf.sock 00:21:25.801 17:24:55 -- common/autotest_common.sh@817 -- # '[' -z 88848 ']' 00:21:25.801 17:24:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.801 17:24:55 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:25.801 17:24:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.801 17:24:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.801 17:24:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:25.801 17:24:55 -- common/autotest_common.sh@10 -- # set +x 00:21:25.801 17:24:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:25.801 17:24:55 -- common/autotest_common.sh@850 -- # return 0 00:21:25.801 17:24:55 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:25.801 [2024-04-25 17:24:55.596761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:25.801 17:24:55 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:26.060 [2024-04-25 17:24:55.796899] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:26.060 17:24:55 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.319 NVMe0n1 00:21:26.319 17:24:56 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.578 00:21:26.578 17:24:56 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.838 00:21:26.838 17:24:56 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.838 17:24:56 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:27.097 17:24:56 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.355 17:24:57 -- host/failover.sh@87 -- # sleep 3 00:21:30.644 17:25:00 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.644 17:25:00 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:30.644 17:25:00 -- host/failover.sh@90 -- # run_test_pid=88971 00:21:30.644 17:25:00 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.644 17:25:00 -- host/failover.sh@92 -- # wait 88971 00:21:31.580 0 00:21:31.580 17:25:01 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:31.580 [2024-04-25 17:24:55.102097] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:31.580 [2024-04-25 17:24:55.102188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88848 ] 00:21:31.580 [2024-04-25 17:24:55.235603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.580 [2024-04-25 17:24:55.285358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.580 [2024-04-25 17:24:57.092683] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:31.580 [2024-04-25 17:24:57.092812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.580 [2024-04-25 17:24:57.092836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.580 [2024-04-25 17:24:57.092853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.580 [2024-04-25 17:24:57.092866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.580 [2024-04-25 17:24:57.092879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.580 [2024-04-25 17:24:57.092892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.580 [2024-04-25 17:24:57.092906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.580 [2024-04-25 17:24:57.092918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.580 [2024-04-25 17:24:57.092931] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:31.580 [2024-04-25 17:24:57.092972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:31.580 [2024-04-25 17:24:57.093000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8a220 (9): Bad file descriptor 00:21:31.580 [2024-04-25 17:24:57.101944] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:31.580 Running I/O for 1 seconds... 
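In the second phase the script re-attaches NVMe0 and then detaches whichever portal is currently active, forcing bdev_nvme to fail over to the next listener; the one-second verification run whose results follow below confirms that I/O kept completing across the switch. A sketch of one such detach-driven failover step against the bdevperf RPC socket (addresses, NQN and socket path are taken from the traced commands above; this is an illustration, not the full failover.sh flow):

  # Sketch: force a failover by detaching the active portal, then confirm
  # the controller is still registered over the bdevperf RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"

  # bdev_nvme should now be using one of the remaining listeners.
  "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
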
00:21:31.580 00:21:31.580 Latency(us) 00:21:31.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.580 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:31.580 Verification LBA range: start 0x0 length 0x4000 00:21:31.580 NVMe0n1 : 1.01 10295.06 40.22 0.00 0.00 12370.19 1846.92 12928.47 00:21:31.580 =================================================================================================================== 00:21:31.580 Total : 10295.06 40.22 0.00 0.00 12370.19 1846.92 12928.47 00:21:31.580 17:25:01 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.580 17:25:01 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:31.838 17:25:01 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.096 17:25:01 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.096 17:25:01 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:32.354 17:25:02 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.611 17:25:02 -- host/failover.sh@101 -- # sleep 3 00:21:35.896 17:25:05 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.896 17:25:05 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:35.896 17:25:05 -- host/failover.sh@108 -- # killprocess 88848 00:21:35.896 17:25:05 -- common/autotest_common.sh@936 -- # '[' -z 88848 ']' 00:21:35.896 17:25:05 -- common/autotest_common.sh@940 -- # kill -0 88848 00:21:35.896 17:25:05 -- common/autotest_common.sh@941 -- # uname 00:21:35.896 17:25:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.896 17:25:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88848 00:21:35.896 killing process with pid 88848 00:21:35.896 17:25:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:35.896 17:25:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:35.896 17:25:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88848' 00:21:35.896 17:25:05 -- common/autotest_common.sh@955 -- # kill 88848 00:21:35.896 17:25:05 -- common/autotest_common.sh@960 -- # wait 88848 00:21:35.896 17:25:05 -- host/failover.sh@110 -- # sync 00:21:35.896 17:25:05 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.155 17:25:06 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:36.155 17:25:06 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:36.415 17:25:06 -- host/failover.sh@116 -- # nvmftestfini 00:21:36.415 17:25:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:36.415 17:25:06 -- nvmf/common.sh@117 -- # sync 00:21:36.415 17:25:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.415 17:25:06 -- nvmf/common.sh@120 -- # set +e 00:21:36.415 17:25:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.415 17:25:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.415 rmmod nvme_tcp 00:21:36.415 rmmod nvme_fabrics 00:21:36.415 rmmod nvme_keyring 00:21:36.415 17:25:06 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.415 17:25:06 -- nvmf/common.sh@124 -- # set -e 00:21:36.415 17:25:06 -- nvmf/common.sh@125 -- # return 0 00:21:36.415 17:25:06 -- nvmf/common.sh@478 -- # '[' -n 88501 ']' 00:21:36.415 17:25:06 -- nvmf/common.sh@479 -- # killprocess 88501 00:21:36.415 17:25:06 -- common/autotest_common.sh@936 -- # '[' -z 88501 ']' 00:21:36.415 17:25:06 -- common/autotest_common.sh@940 -- # kill -0 88501 00:21:36.415 17:25:06 -- common/autotest_common.sh@941 -- # uname 00:21:36.415 17:25:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.415 17:25:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88501 00:21:36.415 killing process with pid 88501 00:21:36.415 17:25:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:36.415 17:25:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:36.415 17:25:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88501' 00:21:36.415 17:25:06 -- common/autotest_common.sh@955 -- # kill 88501 00:21:36.415 17:25:06 -- common/autotest_common.sh@960 -- # wait 88501 00:21:36.674 17:25:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:36.674 17:25:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:36.674 17:25:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:36.674 17:25:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.674 17:25:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.674 17:25:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.674 17:25:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.674 17:25:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.674 17:25:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:36.674 00:21:36.674 real 0m30.770s 00:21:36.674 user 1m59.207s 00:21:36.674 sys 0m4.164s 00:21:36.674 17:25:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:36.674 17:25:06 -- common/autotest_common.sh@10 -- # set +x 00:21:36.674 ************************************ 00:21:36.674 END TEST nvmf_failover 00:21:36.674 ************************************ 00:21:36.674 17:25:06 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:36.674 17:25:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:36.674 17:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.674 17:25:06 -- common/autotest_common.sh@10 -- # set +x 00:21:36.674 ************************************ 00:21:36.674 START TEST nvmf_discovery 00:21:36.675 ************************************ 00:21:36.675 17:25:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:36.934 * Looking for test storage... 
00:21:36.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.934 17:25:06 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.934 17:25:06 -- nvmf/common.sh@7 -- # uname -s 00:21:36.934 17:25:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.934 17:25:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.934 17:25:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.934 17:25:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.934 17:25:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.934 17:25:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.934 17:25:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.934 17:25:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.934 17:25:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.934 17:25:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.934 17:25:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:36.934 17:25:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:36.934 17:25:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.934 17:25:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.934 17:25:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.934 17:25:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.934 17:25:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.934 17:25:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.934 17:25:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.934 17:25:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.934 17:25:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.934 17:25:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.935 17:25:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.935 17:25:06 -- paths/export.sh@5 -- # export PATH 00:21:36.935 17:25:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.935 17:25:06 -- nvmf/common.sh@47 -- # : 0 00:21:36.935 17:25:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.935 17:25:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.935 17:25:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.935 17:25:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.935 17:25:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.935 17:25:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.935 17:25:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.935 17:25:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.935 17:25:06 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:36.935 17:25:06 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:36.935 17:25:06 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:36.935 17:25:06 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:36.935 17:25:06 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:36.935 17:25:06 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:36.935 17:25:06 -- host/discovery.sh@25 -- # nvmftestinit 00:21:36.935 17:25:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:36.935 17:25:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.935 17:25:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:36.935 17:25:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:36.935 17:25:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:36.935 17:25:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.935 17:25:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.935 17:25:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.935 17:25:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:36.935 17:25:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:36.935 17:25:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:36.935 17:25:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:36.935 17:25:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:36.935 17:25:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:36.935 17:25:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.935 17:25:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.935 17:25:06 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:36.935 17:25:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:36.935 17:25:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.935 17:25:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.935 17:25:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.935 17:25:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.935 17:25:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.935 17:25:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.935 17:25:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.935 17:25:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.935 17:25:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:36.935 17:25:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:36.935 Cannot find device "nvmf_tgt_br" 00:21:36.935 17:25:06 -- nvmf/common.sh@155 -- # true 00:21:36.935 17:25:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.935 Cannot find device "nvmf_tgt_br2" 00:21:36.935 17:25:06 -- nvmf/common.sh@156 -- # true 00:21:36.935 17:25:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:36.935 17:25:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:36.935 Cannot find device "nvmf_tgt_br" 00:21:36.935 17:25:06 -- nvmf/common.sh@158 -- # true 00:21:36.935 17:25:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:36.935 Cannot find device "nvmf_tgt_br2" 00:21:36.935 17:25:06 -- nvmf/common.sh@159 -- # true 00:21:36.935 17:25:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:36.935 17:25:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:36.935 17:25:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.935 17:25:06 -- nvmf/common.sh@162 -- # true 00:21:36.935 17:25:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.935 17:25:06 -- nvmf/common.sh@163 -- # true 00:21:36.935 17:25:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.935 17:25:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.935 17:25:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.935 17:25:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.935 17:25:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.935 17:25:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.935 17:25:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.935 17:25:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:36.935 17:25:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:36.935 17:25:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:36.935 17:25:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:36.935 17:25:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:36.935 17:25:06 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:36.935 17:25:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:37.193 17:25:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:37.193 17:25:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:37.193 17:25:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:37.193 17:25:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:37.193 17:25:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:37.193 17:25:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:37.193 17:25:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:37.193 17:25:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:37.193 17:25:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:37.193 17:25:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:37.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:21:37.193 00:21:37.193 --- 10.0.0.2 ping statistics --- 00:21:37.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.193 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:37.193 17:25:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:37.193 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:37.193 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:37.193 00:21:37.193 --- 10.0.0.3 ping statistics --- 00:21:37.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.193 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:37.193 17:25:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:37.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:37.193 00:21:37.193 --- 10.0.0.1 ping statistics --- 00:21:37.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.193 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:37.193 17:25:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.193 17:25:06 -- nvmf/common.sh@422 -- # return 0 00:21:37.193 17:25:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:37.193 17:25:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.193 17:25:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:37.193 17:25:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:37.193 17:25:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.193 17:25:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:37.193 17:25:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:37.193 17:25:07 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:37.193 17:25:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:37.193 17:25:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:37.193 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.193 17:25:07 -- nvmf/common.sh@470 -- # nvmfpid=89272 00:21:37.193 17:25:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:37.193 17:25:07 -- nvmf/common.sh@471 -- # waitforlisten 89272 00:21:37.193 17:25:07 -- common/autotest_common.sh@817 -- # '[' -z 89272 ']' 00:21:37.193 17:25:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.193 17:25:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:37.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.193 17:25:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.193 17:25:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:37.193 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.193 [2024-04-25 17:25:07.075378] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:37.193 [2024-04-25 17:25:07.075469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.451 [2024-04-25 17:25:07.208056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.451 [2024-04-25 17:25:07.259379] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.451 [2024-04-25 17:25:07.259447] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.451 [2024-04-25 17:25:07.259472] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.451 [2024-04-25 17:25:07.259479] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.451 [2024-04-25 17:25:07.259485] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
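Condensed for reference, the network fixture that nvmf_veth_init built in the lines above, plus the target launch that follows, boils down to the sequence below. This is an illustrative sketch assuming iproute2/iptables and an SPDK build tree at $SPDK_DIR (a placeholder, not a variable from the trace); the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity:

  # namespace for the target, two veth pairs bridged back to the initiator side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # addresses: initiator 10.0.0.1, target 10.0.0.2 (inside the namespace)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # single bridge joining the host-side veth ends
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # allow NVMe/TCP traffic and bridge forwarding, then sanity-check with ping
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2

  # start the target inside the namespace, as nvmfappstart does above
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

After the launch, waitforlisten polls /var/tmp/spdk.sock until the RPC server answers, which is the "Waiting for process to start up and listen on UNIX domain socket" message seen in the trace.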
00:21:37.451 [2024-04-25 17:25:07.259521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.451 17:25:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:37.451 17:25:07 -- common/autotest_common.sh@850 -- # return 0 00:21:37.451 17:25:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:37.451 17:25:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:37.451 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 17:25:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.451 17:25:07 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:37.451 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.451 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 [2024-04-25 17:25:07.388325] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.451 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.451 17:25:07 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:37.451 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.451 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 [2024-04-25 17:25:07.396460] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:37.451 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.451 17:25:07 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:37.451 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.451 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 null0 00:21:37.451 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.451 17:25:07 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:37.451 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.451 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 null1 00:21:37.451 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.451 17:25:07 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:37.451 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.451 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.451 17:25:07 -- host/discovery.sh@45 -- # hostpid=89309 00:21:37.451 17:25:07 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:37.451 17:25:07 -- host/discovery.sh@46 -- # waitforlisten 89309 /tmp/host.sock 00:21:37.451 17:25:07 -- common/autotest_common.sh@817 -- # '[' -z 89309 ']' 00:21:37.451 17:25:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:37.451 17:25:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:37.451 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:37.451 17:25:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:37.451 17:25:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:37.451 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.710 [2024-04-25 17:25:07.474422] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
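The RPC sequence discovery.sh drives in this stretch of the log (continuing just below) can be summarized as follows. A sketch using scripts/rpc.py rather than the rpc_cmd wrapper; the target answers on the default /var/tmp/spdk.sock, the host-side app on /tmp/host.sock, and the $SPDK_DIR/$RPC paths are assumptions:

  RPC="$SPDK_DIR/scripts/rpc.py"

  # target side (default socket): TCP transport, discovery listener on 8009, two null bdevs
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine

  # host side: a second SPDK app acts as the initiator, with its own RPC socket
  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
  hostpid=$!

  # once it is up, point its discovery service at the target's discovery port
  $RPC -s /tmp/host.sock log_set_flag bdev_nvme
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test

All of the RPC names and arguments above are taken from the trace itself; only the wrapper variable and file-system paths are illustrative.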
00:21:37.710 [2024-04-25 17:25:07.474515] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89309 ] 00:21:37.710 [2024-04-25 17:25:07.610808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.710 [2024-04-25 17:25:07.678063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.969 17:25:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:37.969 17:25:07 -- common/autotest_common.sh@850 -- # return 0 00:21:37.969 17:25:07 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.969 17:25:07 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:37.969 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.969 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.969 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.969 17:25:07 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:37.969 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.969 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.969 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.969 17:25:07 -- host/discovery.sh@72 -- # notify_id=0 00:21:37.969 17:25:07 -- host/discovery.sh@83 -- # get_subsystem_names 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:37.969 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.969 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # sort 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # xargs 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.969 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.969 17:25:07 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:37.969 17:25:07 -- host/discovery.sh@84 -- # get_bdev_list 00:21:37.969 17:25:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.969 17:25:07 -- host/discovery.sh@55 -- # sort 00:21:37.969 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.969 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.969 17:25:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:37.969 17:25:07 -- host/discovery.sh@55 -- # xargs 00:21:37.969 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.969 17:25:07 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:37.969 17:25:07 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:37.969 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.969 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:37.969 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.969 17:25:07 -- host/discovery.sh@87 -- # get_subsystem_names 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:37.969 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # xargs 00:21:37.969 17:25:07 -- common/autotest_common.sh@10 
-- # set +x 00:21:37.969 17:25:07 -- host/discovery.sh@59 -- # sort 00:21:37.969 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.227 17:25:07 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:38.227 17:25:07 -- host/discovery.sh@88 -- # get_bdev_list 00:21:38.227 17:25:07 -- host/discovery.sh@55 -- # sort 00:21:38.227 17:25:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.227 17:25:07 -- host/discovery.sh@55 -- # xargs 00:21:38.227 17:25:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.227 17:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.227 17:25:07 -- common/autotest_common.sh@10 -- # set +x 00:21:38.227 17:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.227 17:25:08 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:38.227 17:25:08 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:38.227 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.227 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.227 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.227 17:25:08 -- host/discovery.sh@91 -- # get_subsystem_names 00:21:38.227 17:25:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.227 17:25:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.227 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.227 17:25:08 -- host/discovery.sh@59 -- # xargs 00:21:38.227 17:25:08 -- host/discovery.sh@59 -- # sort 00:21:38.227 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.227 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.227 17:25:08 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:38.227 17:25:08 -- host/discovery.sh@92 -- # get_bdev_list 00:21:38.227 17:25:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.227 17:25:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.228 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.228 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.228 17:25:08 -- host/discovery.sh@55 -- # sort 00:21:38.228 17:25:08 -- host/discovery.sh@55 -- # xargs 00:21:38.228 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.228 17:25:08 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:38.228 17:25:08 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:38.228 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.228 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.228 [2024-04-25 17:25:08.136644] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.228 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.228 17:25:08 -- host/discovery.sh@97 -- # get_subsystem_names 00:21:38.228 17:25:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.228 17:25:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.228 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.228 17:25:08 -- host/discovery.sh@59 -- # sort 00:21:38.228 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.228 17:25:08 -- host/discovery.sh@59 -- # xargs 00:21:38.228 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.228 17:25:08 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:38.228 17:25:08 
-- host/discovery.sh@98 -- # get_bdev_list 00:21:38.228 17:25:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.228 17:25:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.228 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.228 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.228 17:25:08 -- host/discovery.sh@55 -- # sort 00:21:38.228 17:25:08 -- host/discovery.sh@55 -- # xargs 00:21:38.487 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.487 17:25:08 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:38.487 17:25:08 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:38.487 17:25:08 -- host/discovery.sh@79 -- # expected_count=0 00:21:38.487 17:25:08 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:38.487 17:25:08 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:38.487 17:25:08 -- common/autotest_common.sh@901 -- # local max=10 00:21:38.487 17:25:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:38.487 17:25:08 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:38.487 17:25:08 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:38.487 17:25:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:38.487 17:25:08 -- host/discovery.sh@74 -- # jq '. | length' 00:21:38.487 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.487 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.487 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.487 17:25:08 -- host/discovery.sh@74 -- # notification_count=0 00:21:38.487 17:25:08 -- host/discovery.sh@75 -- # notify_id=0 00:21:38.487 17:25:08 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:38.487 17:25:08 -- common/autotest_common.sh@904 -- # return 0 00:21:38.487 17:25:08 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:38.487 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.487 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.487 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.487 17:25:08 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.487 17:25:08 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:38.487 17:25:08 -- common/autotest_common.sh@901 -- # local max=10 00:21:38.487 17:25:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:38.487 17:25:08 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:38.487 17:25:08 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:38.487 17:25:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:38.487 17:25:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:38.487 17:25:08 -- host/discovery.sh@59 -- # sort 00:21:38.487 17:25:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.487 17:25:08 -- host/discovery.sh@59 -- # xargs 00:21:38.487 17:25:08 -- common/autotest_common.sh@10 -- # set +x 00:21:38.487 17:25:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.487 17:25:08 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:21:38.487 17:25:08 -- common/autotest_common.sh@906 -- # sleep 1 00:21:39.056 [2024-04-25 17:25:08.780974] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:39.056 [2024-04-25 17:25:08.780997] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:39.056 [2024-04-25 17:25:08.781029] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:39.056 [2024-04-25 17:25:08.867117] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:39.056 [2024-04-25 17:25:08.922510] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:39.056 [2024-04-25 17:25:08.922532] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:39.624 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.624 17:25:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:39.624 17:25:09 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:39.624 17:25:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.624 17:25:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.624 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.624 17:25:09 -- host/discovery.sh@59 -- # sort 00:21:39.624 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.624 17:25:09 -- host/discovery.sh@59 -- # xargs 00:21:39.624 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.624 17:25:09 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.624 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.624 17:25:09 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:39.624 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:39.624 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.624 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.624 17:25:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:39.624 17:25:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:39.624 17:25:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.624 17:25:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.624 17:25:09 -- host/discovery.sh@55 -- # sort 00:21:39.624 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.624 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.624 17:25:09 -- host/discovery.sh@55 -- # xargs 00:21:39.624 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.624 17:25:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:39.624 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.624 17:25:09 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:39.624 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:39.624 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.624 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.624 17:25:09 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:39.624 17:25:09 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:39.624 17:25:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:39.624 17:25:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:39.624 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.624 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.624 17:25:09 -- host/discovery.sh@63 -- # sort -n 00:21:39.624 17:25:09 -- host/discovery.sh@63 -- # xargs 00:21:39.625 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.625 17:25:09 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:21:39.625 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.625 17:25:09 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:39.625 17:25:09 -- host/discovery.sh@79 -- # expected_count=1 00:21:39.625 17:25:09 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.625 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.625 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.625 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.625 17:25:09 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.625 17:25:09 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:39.625 17:25:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:39.625 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.625 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.625 17:25:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:39.625 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.884 17:25:09 -- host/discovery.sh@74 -- # notification_count=1 00:21:39.884 17:25:09 -- host/discovery.sh@75 -- # notify_id=1 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:39.884 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.884 17:25:09 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:39.884 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.884 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.884 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.884 17:25:09 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.884 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.884 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.884 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # sort 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # xargs 00:21:39.884 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:39.884 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.884 17:25:09 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:39.884 17:25:09 -- host/discovery.sh@79 -- # expected_count=1 00:21:39.884 17:25:09 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:39.884 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:39.884 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.884 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:39.884 17:25:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:39.884 17:25:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:39.884 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.884 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.884 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.884 17:25:09 -- host/discovery.sh@74 -- # notification_count=1 00:21:39.884 17:25:09 -- host/discovery.sh@75 -- # notify_id=2 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:39.884 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.884 17:25:09 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:39.884 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.884 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.884 [2024-04-25 17:25:09.733495] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:39.884 [2024-04-25 17:25:09.734248] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:39.884 [2024-04-25 17:25:09.734295] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:39.884 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.884 17:25:09 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.884 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:39.884 17:25:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:39.884 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.884 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.884 17:25:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:39.884 17:25:09 -- host/discovery.sh@59 -- # sort 00:21:39.884 17:25:09 -- host/discovery.sh@59 -- # xargs 00:21:39.884 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.884 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.884 17:25:09 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.884 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.884 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:39.884 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # sort 00:21:39.884 17:25:09 -- host/discovery.sh@55 -- # xargs 00:21:39.884 [2024-04-25 17:25:09.820350] 
bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:39.884 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:39.884 17:25:09 -- common/autotest_common.sh@904 -- # return 0 00:21:39.884 17:25:09 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:39.884 17:25:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:39.884 17:25:09 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:39.884 17:25:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:39.884 17:25:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:39.884 17:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.884 17:25:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.884 17:25:09 -- host/discovery.sh@63 -- # xargs 00:21:39.885 17:25:09 -- host/discovery.sh@63 -- # sort -n 00:21:40.144 17:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.144 [2024-04-25 17:25:09.883577] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:40.144 [2024-04-25 17:25:09.883600] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:40.144 [2024-04-25 17:25:09.883622] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:40.144 17:25:09 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:40.144 17:25:09 -- common/autotest_common.sh@906 -- # sleep 1 00:21:41.079 17:25:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.079 17:25:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:41.079 17:25:10 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:41.079 17:25:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:41.079 17:25:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:41.079 17:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.079 17:25:10 -- common/autotest_common.sh@10 -- # set +x 00:21:41.079 17:25:10 -- host/discovery.sh@63 -- # xargs 00:21:41.079 17:25:10 -- host/discovery.sh@63 -- # sort -n 00:21:41.079 17:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.079 17:25:10 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:41.079 17:25:10 -- common/autotest_common.sh@904 -- # return 0 00:21:41.079 17:25:10 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:41.079 17:25:10 -- host/discovery.sh@79 -- # expected_count=0 00:21:41.079 17:25:10 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:41.079 
17:25:10 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:41.079 17:25:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.079 17:25:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.079 17:25:10 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:41.079 17:25:10 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:41.079 17:25:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:41.079 17:25:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.079 17:25:10 -- common/autotest_common.sh@10 -- # set +x 00:21:41.079 17:25:10 -- host/discovery.sh@74 -- # jq '. | length' 00:21:41.079 17:25:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.079 17:25:11 -- host/discovery.sh@74 -- # notification_count=0 00:21:41.079 17:25:11 -- host/discovery.sh@75 -- # notify_id=2 00:21:41.079 17:25:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:41.079 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.079 17:25:11 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:41.079 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.079 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.079 [2024-04-25 17:25:11.034524] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:41.079 [2024-04-25 17:25:11.034575] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:41.079 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.079 17:25:11 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:41.079 17:25:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:41.079 17:25:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.079 17:25:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.079 17:25:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:41.079 [2024-04-25 17:25:11.040749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.079 17:25:11 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:41.079 [2024-04-25 17:25:11.040811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.079 [2024-04-25 17:25:11.040824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.079 [2024-04-25 17:25:11.040833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.079 [2024-04-25 17:25:11.040843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.079 [2024-04-25 17:25:11.040851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.079 [2024-04-25 17:25:11.040861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.079 [2024-04-25 17:25:11.040870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.079 [2024-04-25 17:25:11.040879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90e310 is same with the state(5) to be set 00:21:41.079 17:25:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:41.079 17:25:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:41.079 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.079 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.079 17:25:11 -- host/discovery.sh@59 -- # sort 00:21:41.079 17:25:11 -- host/discovery.sh@59 -- # xargs 00:21:41.079 [2024-04-25 17:25:11.050687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90e310 (9): Bad file descriptor 00:21:41.339 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.339 [2024-04-25 17:25:11.060706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:41.339 [2024-04-25 17:25:11.060873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.060920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.060942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90e310 with addr=10.0.0.2, port=4420 00:21:41.339 [2024-04-25 17:25:11.060951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90e310 is same with the state(5) to be set 00:21:41.339 [2024-04-25 17:25:11.060967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90e310 (9): Bad file descriptor 00:21:41.339 [2024-04-25 17:25:11.060981] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:41.339 [2024-04-25 17:25:11.060989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:41.339 [2024-04-25 17:25:11.060999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:41.339 [2024-04-25 17:25:11.061029] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
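The connect() failures with errno 111 (ECONNREFUSED) above are the expected fallout of the listener removal issued a few lines earlier: the host still holds a path to port 4420 after the listener is gone. In sketch form, reusing the $RPC placeholder from the earlier sketch:

  # drop the 4420 listener on the target; the host's stale connection to that port
  # now fails to reconnect until the discovery service prunes the path
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # after the next discovery log page, only the 4421 path should remain
  $RPC -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0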
00:21:41.339 [2024-04-25 17:25:11.070821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:41.339 [2024-04-25 17:25:11.070924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.070965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.070980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90e310 with addr=10.0.0.2, port=4420 00:21:41.339 [2024-04-25 17:25:11.070989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90e310 is same with the state(5) to be set 00:21:41.339 [2024-04-25 17:25:11.071003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90e310 (9): Bad file descriptor 00:21:41.339 [2024-04-25 17:25:11.071016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:41.339 [2024-04-25 17:25:11.071023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:41.339 [2024-04-25 17:25:11.071031] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:41.339 [2024-04-25 17:25:11.071044] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:41.339 [2024-04-25 17:25:11.080898] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:41.339 [2024-04-25 17:25:11.081007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.081050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.081080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90e310 with addr=10.0.0.2, port=4420 00:21:41.339 [2024-04-25 17:25:11.081089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90e310 is same with the state(5) to be set 00:21:41.339 [2024-04-25 17:25:11.081103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90e310 (9): Bad file descriptor 00:21:41.339 [2024-04-25 17:25:11.081125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:41.339 [2024-04-25 17:25:11.081134] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:41.339 [2024-04-25 17:25:11.081142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:41.339 [2024-04-25 17:25:11.081171] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
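The waitforcondition calls that punctuate this test are a small polling helper from autotest_common.sh. Roughly, based only on the lines echoed in the trace (cond, max=10, eval, sleep 1; the final failure return is an assumption):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0   # condition already holds
          sleep 1                    # otherwise retry, up to ~10 attempts
      done
      return 1
  }

  # e.g. wait for the discovery path set to shrink back to the second port only
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'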
00:21:41.339 [2024-04-25 17:25:11.090960] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:41.339 [2024-04-25 17:25:11.091080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.091121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.091136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90e310 with addr=10.0.0.2, port=4420 00:21:41.339 [2024-04-25 17:25:11.091146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90e310 is same with the state(5) to be set 00:21:41.339 [2024-04-25 17:25:11.091159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90e310 (9): Bad file descriptor 00:21:41.339 [2024-04-25 17:25:11.091172] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:41.339 [2024-04-25 17:25:11.091180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:41.339 [2024-04-25 17:25:11.091187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:41.339 [2024-04-25 17:25:11.091200] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:41.339 17:25:11 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.339 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.339 17:25:11 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:41.339 17:25:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:41.339 17:25:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.339 17:25:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.339 17:25:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:41.339 17:25:11 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:41.339 17:25:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.339 [2024-04-25 17:25:11.101037] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:41.339 [2024-04-25 17:25:11.101138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.101198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.339 [2024-04-25 17:25:11.101213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90e310 with addr=10.0.0.2, port=4420 00:21:41.339 [2024-04-25 17:25:11.101239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90e310 is same with the state(5) to be set 00:21:41.339 [2024-04-25 17:25:11.101255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90e310 (9): Bad file descriptor 00:21:41.339 [2024-04-25 17:25:11.101278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:41.339 [2024-04-25 17:25:11.101289] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:41.339 [2024-04-25 
17:25:11.101298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:41.339 [2024-04-25 17:25:11.101313] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:41.339 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.339 17:25:11 -- host/discovery.sh@55 -- # sort 00:21:41.339 17:25:11 -- host/discovery.sh@55 -- # xargs 00:21:41.339 17:25:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.339 [2024-04-25 17:25:11.111112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:41.339 [2024-04-25 17:25:11.111265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.111309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.339 [2024-04-25 17:25:11.111324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x90e310 with addr=10.0.0.2, port=4420 00:21:41.339 [2024-04-25 17:25:11.111334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x90e310 is same with the state(5) to be set 00:21:41.339 [2024-04-25 17:25:11.111348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90e310 (9): Bad file descriptor 00:21:41.339 [2024-04-25 17:25:11.111361] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:41.339 [2024-04-25 17:25:11.111370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:41.339 [2024-04-25 17:25:11.111378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:41.339 [2024-04-25 17:25:11.111391] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
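For the path checks that follow, get_subsystem_paths in discovery.sh amounts to a jq query over bdev_nvme_get_controllers, as the host/discovery.sh@63 lines in the trace show. A sketch of the equivalent, assuming the same host socket and the $RPC placeholder used above:

  get_subsystem_paths() {
      # print the trsvcid (port) of every path of the named controller, sorted
      $RPC -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  get_subsystem_paths nvme0    # "4420 4421" before the removal, "4421" after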
00:21:41.339 [2024-04-25 17:25:11.119993] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:41.339 [2024-04-25 17:25:11.120040] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:41.339 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.339 17:25:11 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:41.339 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.339 17:25:11 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:41.339 17:25:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:41.339 17:25:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.339 17:25:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.339 17:25:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:41.339 17:25:11 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:41.339 17:25:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:41.339 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.339 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.339 17:25:11 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:41.339 17:25:11 -- host/discovery.sh@63 -- # xargs 00:21:41.339 17:25:11 -- host/discovery.sh@63 -- # sort -n 00:21:41.339 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.339 17:25:11 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:21:41.339 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.339 17:25:11 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:41.339 17:25:11 -- host/discovery.sh@79 -- # expected_count=0 00:21:41.339 17:25:11 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:41.339 17:25:11 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:41.339 17:25:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.340 17:25:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.340 17:25:11 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:41.340 17:25:11 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:41.340 17:25:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:41.340 17:25:11 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:41.340 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.340 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.340 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.340 17:25:11 -- host/discovery.sh@74 -- # notification_count=0 00:21:41.340 17:25:11 -- host/discovery.sh@75 -- # notify_id=2 00:21:41.340 17:25:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:41.340 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.340 17:25:11 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:41.340 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.340 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.340 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.340 17:25:11 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:41.340 17:25:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:41.340 17:25:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.340 17:25:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.340 17:25:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:41.340 17:25:11 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:41.340 17:25:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:41.340 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.340 17:25:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:41.340 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.340 17:25:11 -- host/discovery.sh@59 -- # sort 00:21:41.340 17:25:11 -- host/discovery.sh@59 -- # xargs 00:21:41.340 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.599 17:25:11 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:21:41.599 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.599 17:25:11 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:41.599 17:25:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:41.599 17:25:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.599 17:25:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.599 17:25:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:41.599 17:25:11 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:41.599 17:25:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.599 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.599 17:25:11 -- host/discovery.sh@55 -- # sort 00:21:41.599 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.599 17:25:11 -- host/discovery.sh@55 -- # xargs 00:21:41.599 17:25:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:41.599 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.599 17:25:11 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:21:41.599 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.599 17:25:11 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:41.599 17:25:11 -- host/discovery.sh@79 -- # expected_count=2 00:21:41.599 17:25:11 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:41.599 17:25:11 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:41.599 17:25:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:41.599 17:25:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:41.599 17:25:11 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:41.599 17:25:11 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:41.599 17:25:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:41.599 17:25:11 -- host/discovery.sh@74 -- # jq '. | length' 00:21:41.599 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.599 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.599 17:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.599 17:25:11 -- host/discovery.sh@74 -- # notification_count=2 00:21:41.599 17:25:11 -- host/discovery.sh@75 -- # notify_id=4 00:21:41.599 17:25:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:41.599 17:25:11 -- common/autotest_common.sh@904 -- # return 0 00:21:41.599 17:25:11 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:41.599 17:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:41.599 17:25:11 -- common/autotest_common.sh@10 -- # set +x 00:21:42.536 [2024-04-25 17:25:12.471492] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:42.536 [2024-04-25 17:25:12.471515] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:42.536 [2024-04-25 17:25:12.471546] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:42.795 [2024-04-25 17:25:12.557589] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:42.795 [2024-04-25 17:25:12.616547] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:42.795 [2024-04-25 17:25:12.616602] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:42.795 17:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.795 17:25:12 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.795 17:25:12 -- common/autotest_common.sh@638 -- # local es=0 00:21:42.795 17:25:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.795 17:25:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:42.795 17:25:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.795 17:25:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:42.795 17:25:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.795 17:25:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.795 17:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.795 17:25:12 -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.795 2024/04/25 17:25:12 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:42.795 request: 00:21:42.795 { 00:21:42.795 "method": "bdev_nvme_start_discovery", 00:21:42.795 "params": { 00:21:42.795 "name": "nvme", 00:21:42.795 "trtype": "tcp", 00:21:42.795 "traddr": "10.0.0.2", 00:21:42.795 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:42.795 "adrfam": "ipv4", 00:21:42.795 "trsvcid": "8009", 00:21:42.795 "wait_for_attach": true 00:21:42.795 } 00:21:42.795 } 00:21:42.795 Got JSON-RPC error response 00:21:42.795 GoRPCClient: error on JSON-RPC call 00:21:42.795 17:25:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:42.795 17:25:12 -- common/autotest_common.sh@641 -- # es=1 00:21:42.795 17:25:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:42.795 17:25:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:42.795 17:25:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:42.795 17:25:12 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:42.795 17:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.795 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # xargs 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # sort 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:42.795 17:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.795 17:25:12 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:42.795 17:25:12 -- host/discovery.sh@146 -- # get_bdev_list 00:21:42.795 17:25:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.795 17:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.795 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:21:42.795 17:25:12 -- host/discovery.sh@55 -- # sort 00:21:42.795 17:25:12 -- host/discovery.sh@55 -- # xargs 00:21:42.795 17:25:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.795 17:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.795 17:25:12 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:42.795 17:25:12 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.795 17:25:12 -- common/autotest_common.sh@638 -- # local es=0 00:21:42.795 17:25:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.795 17:25:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:42.795 17:25:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.795 17:25:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:42.795 17:25:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.795 17:25:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:42.795 17:25:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.795 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:21:42.795 2024/04/25 17:25:12 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:42.795 request: 00:21:42.795 { 00:21:42.795 "method": "bdev_nvme_start_discovery", 00:21:42.795 "params": { 00:21:42.795 "name": "nvme_second", 00:21:42.795 "trtype": "tcp", 00:21:42.795 "traddr": "10.0.0.2", 00:21:42.795 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:42.795 "adrfam": "ipv4", 00:21:42.795 "trsvcid": "8009", 00:21:42.795 "wait_for_attach": true 00:21:42.795 } 00:21:42.795 } 00:21:42.795 Got JSON-RPC error response 00:21:42.795 GoRPCClient: error on JSON-RPC call 00:21:42.795 17:25:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:42.795 17:25:12 -- common/autotest_common.sh@641 -- # es=1 00:21:42.795 17:25:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:42.795 17:25:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:42.795 17:25:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:42.795 17:25:12 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:42.795 17:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:42.795 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # sort 00:21:42.795 17:25:12 -- host/discovery.sh@67 -- # xargs 00:21:43.053 17:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.053 17:25:12 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:43.053 17:25:12 -- host/discovery.sh@152 -- # get_bdev_list 00:21:43.053 17:25:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.053 17:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.053 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:21:43.053 17:25:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.053 17:25:12 -- host/discovery.sh@55 -- # sort 00:21:43.053 17:25:12 -- host/discovery.sh@55 -- # xargs 00:21:43.053 17:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.053 17:25:12 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:43.054 17:25:12 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.054 17:25:12 -- common/autotest_common.sh@638 -- # local es=0 00:21:43.054 17:25:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.054 17:25:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:43.054 17:25:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:43.054 17:25:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:43.054 17:25:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:43.054 17:25:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 
-s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:43.054 17:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.054 17:25:12 -- common/autotest_common.sh@10 -- # set +x 00:21:44.002 [2024-04-25 17:25:13.890736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.002 [2024-04-25 17:25:13.890859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.002 [2024-04-25 17:25:13.890880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x961e20 with addr=10.0.0.2, port=8010 00:21:44.002 [2024-04-25 17:25:13.890898] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:44.002 [2024-04-25 17:25:13.890908] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:44.002 [2024-04-25 17:25:13.890917] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:44.975 [2024-04-25 17:25:14.890732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.975 [2024-04-25 17:25:14.890812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.975 [2024-04-25 17:25:14.890829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x961e20 with addr=10.0.0.2, port=8010 00:21:44.975 [2024-04-25 17:25:14.890842] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:44.975 [2024-04-25 17:25:14.890850] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:44.976 [2024-04-25 17:25:14.890859] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:46.354 [2024-04-25 17:25:15.890629] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:46.354 2024/04/25 17:25:15 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:46.354 request: 00:21:46.354 { 00:21:46.354 "method": "bdev_nvme_start_discovery", 00:21:46.354 "params": { 00:21:46.354 "name": "nvme_second", 00:21:46.354 "trtype": "tcp", 00:21:46.354 "traddr": "10.0.0.2", 00:21:46.354 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:46.354 "adrfam": "ipv4", 00:21:46.354 "trsvcid": "8010", 00:21:46.354 "attach_timeout_ms": 3000 00:21:46.354 } 00:21:46.354 } 00:21:46.354 Got JSON-RPC error response 00:21:46.354 GoRPCClient: error on JSON-RPC call 00:21:46.354 17:25:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:46.354 17:25:15 -- common/autotest_common.sh@641 -- # es=1 00:21:46.354 17:25:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:46.354 17:25:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:46.354 17:25:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:46.354 17:25:15 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:46.354 17:25:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:46.354 17:25:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:46.354 17:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:46.354 17:25:15 -- host/discovery.sh@67 -- # sort 00:21:46.354 17:25:15 -- common/autotest_common.sh@10 -- # set +x 00:21:46.354 17:25:15 -- host/discovery.sh@67 -- # 
xargs 00:21:46.354 17:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:46.354 17:25:15 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:46.354 17:25:15 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:46.354 17:25:15 -- host/discovery.sh@161 -- # kill 89309 00:21:46.354 17:25:15 -- host/discovery.sh@162 -- # nvmftestfini 00:21:46.354 17:25:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:46.354 17:25:15 -- nvmf/common.sh@117 -- # sync 00:21:46.354 17:25:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.354 17:25:15 -- nvmf/common.sh@120 -- # set +e 00:21:46.354 17:25:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.354 17:25:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.354 rmmod nvme_tcp 00:21:46.354 rmmod nvme_fabrics 00:21:46.354 rmmod nvme_keyring 00:21:46.354 17:25:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.354 17:25:16 -- nvmf/common.sh@124 -- # set -e 00:21:46.354 17:25:16 -- nvmf/common.sh@125 -- # return 0 00:21:46.354 17:25:16 -- nvmf/common.sh@478 -- # '[' -n 89272 ']' 00:21:46.354 17:25:16 -- nvmf/common.sh@479 -- # killprocess 89272 00:21:46.354 17:25:16 -- common/autotest_common.sh@936 -- # '[' -z 89272 ']' 00:21:46.354 17:25:16 -- common/autotest_common.sh@940 -- # kill -0 89272 00:21:46.354 17:25:16 -- common/autotest_common.sh@941 -- # uname 00:21:46.354 17:25:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.354 17:25:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89272 00:21:46.354 17:25:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:46.354 17:25:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:46.354 killing process with pid 89272 00:21:46.354 17:25:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89272' 00:21:46.354 17:25:16 -- common/autotest_common.sh@955 -- # kill 89272 00:21:46.354 17:25:16 -- common/autotest_common.sh@960 -- # wait 89272 00:21:46.354 17:25:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:46.354 17:25:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:46.354 17:25:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:46.354 17:25:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.354 17:25:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.354 17:25:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.354 17:25:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.354 17:25:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.354 17:25:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:46.354 00:21:46.354 real 0m9.696s 00:21:46.354 user 0m19.622s 00:21:46.354 sys 0m1.455s 00:21:46.354 17:25:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:46.354 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.354 ************************************ 00:21:46.354 END TEST nvmf_discovery 00:21:46.354 ************************************ 00:21:46.354 17:25:16 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:46.354 17:25:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:46.354 17:25:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:46.354 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.614 ************************************ 00:21:46.614 START TEST nvmf_discovery_remove_ifc 
00:21:46.614 ************************************ 00:21:46.614 17:25:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:46.614 * Looking for test storage... 00:21:46.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:46.614 17:25:16 -- nvmf/common.sh@7 -- # uname -s 00:21:46.614 17:25:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.614 17:25:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.614 17:25:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.614 17:25:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.614 17:25:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.614 17:25:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.614 17:25:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.614 17:25:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.614 17:25:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.614 17:25:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.614 17:25:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:46.614 17:25:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:46.614 17:25:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.614 17:25:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.614 17:25:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:46.614 17:25:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.614 17:25:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.614 17:25:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.614 17:25:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.614 17:25:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.614 17:25:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.614 17:25:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.614 17:25:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.614 17:25:16 -- paths/export.sh@5 -- # export PATH 00:21:46.614 17:25:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.614 17:25:16 -- nvmf/common.sh@47 -- # : 0 00:21:46.614 17:25:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.614 17:25:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.614 17:25:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.614 17:25:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.614 17:25:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.614 17:25:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.614 17:25:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.614 17:25:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:46.614 17:25:16 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:46.614 17:25:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:46.614 17:25:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.614 17:25:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:46.614 17:25:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:46.614 17:25:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:46.614 17:25:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.614 17:25:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.614 17:25:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.614 17:25:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:46.614 17:25:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:46.614 17:25:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:46.614 17:25:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:46.614 17:25:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:46.614 17:25:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:46.614 17:25:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.614 17:25:16 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.614 17:25:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:46.614 17:25:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:46.614 17:25:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:46.614 17:25:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:46.614 17:25:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:46.614 17:25:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.614 17:25:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:46.614 17:25:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.614 17:25:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.614 17:25:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.614 17:25:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:46.614 17:25:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:46.614 Cannot find device "nvmf_tgt_br" 00:21:46.614 17:25:16 -- nvmf/common.sh@155 -- # true 00:21:46.614 17:25:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:46.614 Cannot find device "nvmf_tgt_br2" 00:21:46.614 17:25:16 -- nvmf/common.sh@156 -- # true 00:21:46.614 17:25:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:46.614 17:25:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:46.614 Cannot find device "nvmf_tgt_br" 00:21:46.614 17:25:16 -- nvmf/common.sh@158 -- # true 00:21:46.614 17:25:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:46.614 Cannot find device "nvmf_tgt_br2" 00:21:46.614 17:25:16 -- nvmf/common.sh@159 -- # true 00:21:46.614 17:25:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:46.614 17:25:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:46.874 17:25:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.874 17:25:16 -- nvmf/common.sh@162 -- # true 00:21:46.874 17:25:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.874 17:25:16 -- nvmf/common.sh@163 -- # true 00:21:46.874 17:25:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.874 17:25:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.874 17:25:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.874 17:25:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.874 17:25:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:46.874 17:25:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.874 17:25:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.874 17:25:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:46.874 17:25:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:46.874 17:25:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:46.874 17:25:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:46.874 17:25:16 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:46.874 17:25:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:46.874 17:25:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.874 17:25:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.874 17:25:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:46.874 17:25:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:46.874 17:25:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:46.874 17:25:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:46.874 17:25:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:46.874 17:25:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:46.874 17:25:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:46.874 17:25:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:46.874 17:25:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:46.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:21:46.874 00:21:46.874 --- 10.0.0.2 ping statistics --- 00:21:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.874 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:46.874 17:25:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:46.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:46.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:21:46.874 00:21:46.874 --- 10.0.0.3 ping statistics --- 00:21:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.874 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:46.874 17:25:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:46.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:46.874 00:21:46.874 --- 10.0.0.1 ping statistics --- 00:21:46.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.874 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:46.874 17:25:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.874 17:25:16 -- nvmf/common.sh@422 -- # return 0 00:21:46.874 17:25:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:46.874 17:25:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.874 17:25:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:46.874 17:25:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:46.874 17:25:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.874 17:25:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:46.874 17:25:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:46.874 17:25:16 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:46.874 17:25:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:46.874 17:25:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:46.874 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.874 17:25:16 -- nvmf/common.sh@470 -- # nvmfpid=89775 00:21:46.874 17:25:16 -- nvmf/common.sh@471 -- # waitforlisten 89775 00:21:46.874 17:25:16 -- common/autotest_common.sh@817 -- # '[' -z 89775 ']' 00:21:46.874 17:25:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.874 17:25:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.874 17:25:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:46.874 17:25:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.874 17:25:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:46.874 17:25:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.134 [2024-04-25 17:25:16.888073] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:21:47.134 [2024-04-25 17:25:16.888158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.134 [2024-04-25 17:25:17.025278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.134 [2024-04-25 17:25:17.075390] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.134 [2024-04-25 17:25:17.075450] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.134 [2024-04-25 17:25:17.075459] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.134 [2024-04-25 17:25:17.075466] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.134 [2024-04-25 17:25:17.075472] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
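For reference, the virtual topology that the nvmf_veth_init steps above build can be condensed as follows (a sketch only; interface names, addresses and the iptables rule are copied from the trace; the second target interface nvmf_tgt_if2/10.0.0.3 and the final "ip link set ... up" calls are set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings to 10.0.0.2, 10.0.0.3 and (inside the namespace) 10.0.0.1 above simply confirm the bridge passes traffic before the target was started inside nvmf_tgt_ns_spdk.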
00:21:47.134 [2024-04-25 17:25:17.075504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.394 17:25:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.394 17:25:17 -- common/autotest_common.sh@850 -- # return 0 00:21:47.394 17:25:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:47.394 17:25:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:47.394 17:25:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.394 17:25:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.394 17:25:17 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:47.394 17:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.394 17:25:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.394 [2024-04-25 17:25:17.212239] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.394 [2024-04-25 17:25:17.220390] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:47.394 null0 00:21:47.394 [2024-04-25 17:25:17.256376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.394 17:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.394 17:25:17 -- host/discovery_remove_ifc.sh@59 -- # hostpid=89812 00:21:47.394 17:25:17 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:47.394 17:25:17 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 89812 /tmp/host.sock 00:21:47.394 17:25:17 -- common/autotest_common.sh@817 -- # '[' -z 89812 ']' 00:21:47.394 17:25:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:47.394 17:25:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:47.394 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:47.394 17:25:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:47.394 17:25:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:47.394 17:25:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.394 [2024-04-25 17:25:17.322954] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:21:47.395 [2024-04-25 17:25:17.323022] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89812 ] 00:21:47.653 [2024-04-25 17:25:17.453624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.653 [2024-04-25 17:25:17.506671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.653 17:25:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.653 17:25:17 -- common/autotest_common.sh@850 -- # return 0 00:21:47.653 17:25:17 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.653 17:25:17 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:47.653 17:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.653 17:25:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.653 17:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.653 17:25:17 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:47.653 17:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.653 17:25:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.911 17:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:47.911 17:25:17 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:47.911 17:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:47.911 17:25:17 -- common/autotest_common.sh@10 -- # set +x 00:21:48.847 [2024-04-25 17:25:18.644916] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.847 [2024-04-25 17:25:18.644938] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.847 [2024-04-25 17:25:18.644969] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.847 [2024-04-25 17:25:18.731025] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:48.847 [2024-04-25 17:25:18.786355] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:48.847 [2024-04-25 17:25:18.786423] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:48.847 [2024-04-25 17:25:18.786445] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:48.847 [2024-04-25 17:25:18.786458] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.847 [2024-04-25 17:25:18.786476] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:48.847 17:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:48.847 17:25:18 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:48.847 17:25:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:48.847 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.847 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:21:48.847 17:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:48.847 17:25:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.847 [2024-04-25 17:25:18.793687] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x605d60 was disconnected and freed. delete nvme_qpair. 00:21:48.847 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:48.847 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:48.847 17:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.106 17:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.106 17:25:18 -- common/autotest_common.sh@10 -- # set +x 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.106 17:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:49.106 17:25:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:50.042 17:25:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:50.042 17:25:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.042 17:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.042 17:25:19 -- common/autotest_common.sh@10 -- # set +x 00:21:50.042 17:25:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:50.042 17:25:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:50.042 17:25:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:50.042 17:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.042 17:25:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:50.042 17:25:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:51.418 17:25:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:51.418 17:25:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.418 17:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:51.418 17:25:20 -- common/autotest_common.sh@10 -- # set +x 00:21:51.418 17:25:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:51.418 17:25:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:51.418 17:25:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:51.418 17:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:51.418 17:25:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:51.418 17:25:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:52.355 17:25:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:52.355 17:25:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.355 17:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.355 17:25:22 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:21:52.355 17:25:22 -- common/autotest_common.sh@10 -- # set +x 00:21:52.355 17:25:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:52.355 17:25:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:52.355 17:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.355 17:25:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:52.355 17:25:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:53.291 17:25:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:53.291 17:25:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.291 17:25:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:53.291 17:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.291 17:25:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:53.291 17:25:23 -- common/autotest_common.sh@10 -- # set +x 00:21:53.291 17:25:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:53.291 17:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.291 17:25:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:53.291 17:25:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:54.226 17:25:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:54.226 17:25:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:54.226 17:25:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:54.226 17:25:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:54.226 17:25:24 -- common/autotest_common.sh@10 -- # set +x 00:21:54.226 17:25:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:54.226 17:25:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:54.484 17:25:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:54.485 [2024-04-25 17:25:24.224835] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:54.485 [2024-04-25 17:25:24.224939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.485 [2024-04-25 17:25:24.224959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.485 [2024-04-25 17:25:24.224978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.485 [2024-04-25 17:25:24.224993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.485 [2024-04-25 17:25:24.225002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.485 [2024-04-25 17:25:24.225011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.485 [2024-04-25 17:25:24.225021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.485 [2024-04-25 17:25:24.225029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.485 [2024-04-25 17:25:24.225038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:54.485 [2024-04-25 17:25:24.225046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.485 [2024-04-25 17:25:24.225055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cf360 is same with the state(5) to be set 00:21:54.485 [2024-04-25 17:25:24.234831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cf360 (9): Bad file descriptor 00:21:54.485 [2024-04-25 17:25:24.244856] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:54.485 17:25:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:54.485 17:25:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:55.421 17:25:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:55.421 17:25:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.421 17:25:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:55.421 17:25:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:55.421 17:25:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:55.421 17:25:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.421 17:25:25 -- common/autotest_common.sh@10 -- # set +x 00:21:55.421 [2024-04-25 17:25:25.268816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:56.359 [2024-04-25 17:25:26.292807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:56.359 [2024-04-25 17:25:26.292922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5cf360 with addr=10.0.0.2, port=4420 00:21:56.359 [2024-04-25 17:25:26.292956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cf360 is same with the state(5) to be set 00:21:56.359 [2024-04-25 17:25:26.293836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cf360 (9): Bad file descriptor 00:21:56.359 [2024-04-25 17:25:26.293923] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:56.359 [2024-04-25 17:25:26.294001] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:56.359 [2024-04-25 17:25:26.294088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.359 [2024-04-25 17:25:26.294126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.359 [2024-04-25 17:25:26.294152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.359 [2024-04-25 17:25:26.294173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.359 [2024-04-25 17:25:26.294194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.359 [2024-04-25 17:25:26.294214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.359 [2024-04-25 17:25:26.294235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.359 [2024-04-25 17:25:26.294255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.359 [2024-04-25 17:25:26.294277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.359 [2024-04-25 17:25:26.294298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.359 [2024-04-25 17:25:26.294318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
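For reference, the errno 110 timeouts and aborted admin commands above are the direct effect of the address removal traced earlier at discovery_remove_ifc.sh@75-76; connectivity is restored at @82-83 below so discovery can re-attach the subsystem as nvme1/nvme1n1. Condensed (a sketch only; commands exactly as they appear in the trace):

  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # the host-side bdev list is polled until nvme0n1 disappears, then:
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up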
00:21:56.359 [2024-04-25 17:25:26.294382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5768e0 (9): Bad file descriptor 00:21:56.359 [2024-04-25 17:25:26.295382] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:56.359 [2024-04-25 17:25:26.295435] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:56.359 17:25:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.359 17:25:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:56.359 17:25:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.736 17:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:57.736 17:25:27 -- common/autotest_common.sh@10 -- # set +x 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:57.736 17:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:57.736 17:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:57.736 17:25:27 -- common/autotest_common.sh@10 -- # set +x 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:57.736 17:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:57.736 17:25:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:58.673 [2024-04-25 17:25:28.307197] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:58.673 [2024-04-25 17:25:28.307220] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:58.673 [2024-04-25 17:25:28.307253] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:58.673 [2024-04-25 17:25:28.393344] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:58.673 [2024-04-25 17:25:28.448294] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:58.673 [2024-04-25 17:25:28.448360] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:58.673 [2024-04-25 17:25:28.448399] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:58.673 [2024-04-25 17:25:28.448430] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:21:58.673 [2024-04-25 17:25:28.448442] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:58.673 [2024-04-25 17:25:28.455839] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x5bfe10 was disconnected and freed. delete nvme_qpair. 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:58.673 17:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.673 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:58.673 17:25:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:58.673 17:25:28 -- host/discovery_remove_ifc.sh@90 -- # killprocess 89812 00:21:58.673 17:25:28 -- common/autotest_common.sh@936 -- # '[' -z 89812 ']' 00:21:58.673 17:25:28 -- common/autotest_common.sh@940 -- # kill -0 89812 00:21:58.673 17:25:28 -- common/autotest_common.sh@941 -- # uname 00:21:58.673 17:25:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.673 17:25:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89812 00:21:58.673 killing process with pid 89812 00:21:58.673 17:25:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:58.673 17:25:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:58.673 17:25:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89812' 00:21:58.673 17:25:28 -- common/autotest_common.sh@955 -- # kill 89812 00:21:58.673 17:25:28 -- common/autotest_common.sh@960 -- # wait 89812 00:21:58.932 17:25:28 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:58.932 17:25:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:58.932 17:25:28 -- nvmf/common.sh@117 -- # sync 00:21:58.932 17:25:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.932 17:25:28 -- nvmf/common.sh@120 -- # set +e 00:21:58.932 17:25:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.932 17:25:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.932 rmmod nvme_tcp 00:21:58.932 rmmod nvme_fabrics 00:21:58.932 rmmod nvme_keyring 00:21:58.932 17:25:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.932 17:25:28 -- nvmf/common.sh@124 -- # set -e 00:21:58.932 17:25:28 -- nvmf/common.sh@125 -- # return 0 00:21:58.932 17:25:28 -- nvmf/common.sh@478 -- # '[' -n 89775 ']' 00:21:58.932 17:25:28 -- nvmf/common.sh@479 -- # killprocess 89775 00:21:58.932 17:25:28 -- common/autotest_common.sh@936 -- # '[' -z 89775 ']' 00:21:58.932 17:25:28 -- common/autotest_common.sh@940 -- # kill -0 89775 00:21:58.932 17:25:28 -- common/autotest_common.sh@941 -- # uname 00:21:58.932 17:25:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.932 17:25:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89775 00:21:58.932 killing process with pid 89775 00:21:58.932 17:25:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:58.932 17:25:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
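The wait_for_bdev/get_bdev_list calls above amount to polling the host-side SPDK application over its JSON-RPC socket until the expected bdev name reappears after the discovery controller re-attaches. A minimal standalone sketch of that loop, assuming the autotest rpc_cmd wrapper resolves to scripts/rpc.py and the same /tmp/host.sock socket shown in the log:

    # poll bdev_get_bdevs until the named bdev shows up (1s interval, as in the test)
    wait_for_bdev() {
        local bdev=$1 names
        while :; do
            names=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ " $names " == *" $bdev "* ]] && return 0
            sleep 1
        done
    }
    wait_for_bdev nvme1n1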
00:21:58.932 17:25:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89775' 00:21:58.932 17:25:28 -- common/autotest_common.sh@955 -- # kill 89775 00:21:58.932 17:25:28 -- common/autotest_common.sh@960 -- # wait 89775 00:21:59.191 17:25:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:59.191 17:25:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:59.191 17:25:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:59.192 17:25:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.192 17:25:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.192 17:25:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.192 17:25:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.192 17:25:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.192 17:25:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:59.192 00:21:59.192 real 0m12.642s 00:21:59.192 user 0m21.816s 00:21:59.192 sys 0m1.412s 00:21:59.192 ************************************ 00:21:59.192 END TEST nvmf_discovery_remove_ifc 00:21:59.192 ************************************ 00:21:59.192 17:25:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:59.192 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:21:59.192 17:25:29 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:59.192 17:25:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:59.192 17:25:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:59.192 17:25:29 -- common/autotest_common.sh@10 -- # set +x 00:21:59.192 ************************************ 00:21:59.192 START TEST nvmf_identify_kernel_target 00:21:59.192 ************************************ 00:21:59.192 17:25:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:59.451 * Looking for test storage... 
00:21:59.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.451 17:25:29 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.451 17:25:29 -- nvmf/common.sh@7 -- # uname -s 00:21:59.451 17:25:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.451 17:25:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.451 17:25:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.451 17:25:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.451 17:25:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.451 17:25:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.451 17:25:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.451 17:25:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.451 17:25:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.451 17:25:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.451 17:25:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:59.451 17:25:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:21:59.451 17:25:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.451 17:25:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.451 17:25:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.451 17:25:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.451 17:25:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.451 17:25:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.451 17:25:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.451 17:25:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.451 17:25:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 17:25:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 17:25:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 17:25:29 -- paths/export.sh@5 -- # export PATH 00:21:59.451 17:25:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.451 17:25:29 -- nvmf/common.sh@47 -- # : 0 00:21:59.451 17:25:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.451 17:25:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.451 17:25:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.451 17:25:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.451 17:25:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.451 17:25:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.451 17:25:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.451 17:25:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.451 17:25:29 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:59.451 17:25:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:59.451 17:25:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.451 17:25:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:59.451 17:25:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:59.451 17:25:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:59.451 17:25:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.451 17:25:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.451 17:25:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.451 17:25:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:59.451 17:25:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:59.451 17:25:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:59.451 17:25:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:59.451 17:25:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:59.451 17:25:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:59.451 17:25:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.451 17:25:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.451 17:25:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.451 17:25:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:59.451 17:25:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.451 17:25:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.451 17:25:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.451 17:25:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:59.451 17:25:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.451 17:25:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.451 17:25:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.451 17:25:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.451 17:25:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:59.451 17:25:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:59.451 Cannot find device "nvmf_tgt_br" 00:21:59.451 17:25:29 -- nvmf/common.sh@155 -- # true 00:21:59.451 17:25:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.451 Cannot find device "nvmf_tgt_br2" 00:21:59.451 17:25:29 -- nvmf/common.sh@156 -- # true 00:21:59.451 17:25:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:59.451 17:25:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:59.451 Cannot find device "nvmf_tgt_br" 00:21:59.452 17:25:29 -- nvmf/common.sh@158 -- # true 00:21:59.452 17:25:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:59.452 Cannot find device "nvmf_tgt_br2" 00:21:59.452 17:25:29 -- nvmf/common.sh@159 -- # true 00:21:59.452 17:25:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:59.452 17:25:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:59.452 17:25:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:59.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.452 17:25:29 -- nvmf/common.sh@162 -- # true 00:21:59.452 17:25:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:59.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:59.452 17:25:29 -- nvmf/common.sh@163 -- # true 00:21:59.452 17:25:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:59.452 17:25:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:59.452 17:25:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:59.452 17:25:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:59.452 17:25:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:59.452 17:25:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:59.711 17:25:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:59.711 17:25:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:59.711 17:25:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:59.711 17:25:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:59.711 17:25:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:59.711 17:25:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:59.711 17:25:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:59.711 17:25:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:59.711 17:25:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:59.711 17:25:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:59.711 17:25:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:59.711 17:25:29 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:59.711 17:25:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:59.711 17:25:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:59.711 17:25:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:59.711 17:25:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:59.711 17:25:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:59.711 17:25:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:59.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:59.711 00:21:59.711 --- 10.0.0.2 ping statistics --- 00:21:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.711 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:59.711 17:25:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:59.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:59.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:59.711 00:21:59.711 --- 10.0.0.3 ping statistics --- 00:21:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.711 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:59.711 17:25:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:59.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:59.711 00:21:59.711 --- 10.0.0.1 ping statistics --- 00:21:59.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.711 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:59.711 17:25:29 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.711 17:25:29 -- nvmf/common.sh@422 -- # return 0 00:21:59.711 17:25:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:59.711 17:25:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.711 17:25:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:59.711 17:25:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:59.711 17:25:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.711 17:25:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:59.711 17:25:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:59.711 17:25:29 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:59.711 17:25:29 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:59.711 17:25:29 -- nvmf/common.sh@717 -- # local ip 00:21:59.711 17:25:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:59.711 17:25:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:59.711 17:25:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:59.711 17:25:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:59.711 17:25:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:59.711 17:25:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:59.711 17:25:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:59.711 17:25:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:59.711 17:25:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:21:59.711 17:25:29 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:59.711 17:25:29 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:59.711 17:25:29 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:59.711 17:25:29 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:21:59.711 17:25:29 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:59.711 17:25:29 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:59.712 17:25:29 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:59.712 17:25:29 -- nvmf/common.sh@628 -- # local block nvme 00:21:59.712 17:25:29 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:21:59.712 17:25:29 -- nvmf/common.sh@631 -- # modprobe nvmet 00:21:59.712 17:25:29 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:59.712 17:25:29 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:59.970 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:59.970 Waiting for block devices as requested 00:22:00.229 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.229 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.229 17:25:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:00.229 17:25:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:00.229 17:25:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:00.229 17:25:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:00.229 17:25:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:00.229 17:25:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:00.229 17:25:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:00.229 17:25:30 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:00.229 17:25:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:00.489 No valid GPT data, bailing 00:22:00.489 17:25:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:00.489 17:25:30 -- scripts/common.sh@391 -- # pt= 00:22:00.489 17:25:30 -- scripts/common.sh@392 -- # return 1 00:22:00.489 17:25:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:00.489 17:25:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:00.489 17:25:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:00.489 17:25:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:22:00.489 17:25:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:22:00.489 17:25:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:00.489 17:25:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:00.489 17:25:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:22:00.489 17:25:30 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:00.489 17:25:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:00.489 No valid GPT data, bailing 00:22:00.489 17:25:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:00.489 17:25:30 -- scripts/common.sh@391 -- # pt= 00:22:00.489 17:25:30 -- scripts/common.sh@392 -- # return 1 00:22:00.489 17:25:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:22:00.489 17:25:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:00.489 17:25:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:00.489 17:25:30 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:22:00.489 17:25:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:22:00.489 17:25:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:00.489 17:25:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:00.489 17:25:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:22:00.489 17:25:30 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:00.489 17:25:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:00.489 No valid GPT data, bailing 00:22:00.490 17:25:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:00.490 17:25:30 -- scripts/common.sh@391 -- # pt= 00:22:00.490 17:25:30 -- scripts/common.sh@392 -- # return 1 00:22:00.490 17:25:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:22:00.490 17:25:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:00.490 17:25:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:00.490 17:25:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:22:00.490 17:25:30 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:00.490 17:25:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:00.490 17:25:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:00.490 17:25:30 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:22:00.490 17:25:30 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:00.490 17:25:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:00.490 No valid GPT data, bailing 00:22:00.490 17:25:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:00.490 17:25:30 -- scripts/common.sh@391 -- # pt= 00:22:00.490 17:25:30 -- scripts/common.sh@392 -- # return 1 00:22:00.490 17:25:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:22:00.490 17:25:30 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:22:00.490 17:25:30 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:00.490 17:25:30 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:00.490 17:25:30 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:00.490 17:25:30 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:00.490 17:25:30 -- nvmf/common.sh@656 -- # echo 1 00:22:00.490 17:25:30 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:22:00.490 17:25:30 -- nvmf/common.sh@658 -- # echo 1 00:22:00.490 17:25:30 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:00.490 17:25:30 -- nvmf/common.sh@661 -- # echo tcp 00:22:00.490 17:25:30 -- nvmf/common.sh@662 -- # echo 4420 00:22:00.490 17:25:30 -- nvmf/common.sh@663 -- # echo ipv4 00:22:00.490 17:25:30 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:00.750 17:25:30 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -a 10.0.0.1 -t tcp -s 4420 00:22:00.750 00:22:00.750 Discovery Log Number of Records 2, Generation counter 2 00:22:00.750 =====Discovery Log Entry 0====== 00:22:00.750 trtype: tcp 00:22:00.750 adrfam: ipv4 00:22:00.750 subtype: current discovery subsystem 00:22:00.750 treq: not specified, sq flow control disable supported 00:22:00.750 portid: 1 00:22:00.750 trsvcid: 4420 00:22:00.750 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:00.750 traddr: 10.0.0.1 00:22:00.750 eflags: none 00:22:00.750 sectype: none 00:22:00.750 =====Discovery Log Entry 1====== 00:22:00.750 trtype: tcp 00:22:00.750 adrfam: ipv4 00:22:00.750 subtype: nvme subsystem 00:22:00.750 treq: not specified, sq flow control disable supported 00:22:00.750 portid: 1 00:22:00.750 trsvcid: 4420 00:22:00.750 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:00.750 traddr: 10.0.0.1 00:22:00.750 eflags: none 00:22:00.750 sectype: none 00:22:00.750 17:25:30 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:00.750 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:00.750 ===================================================== 00:22:00.750 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:00.750 ===================================================== 00:22:00.750 Controller Capabilities/Features 00:22:00.750 ================================ 00:22:00.750 Vendor ID: 0000 00:22:00.750 Subsystem Vendor ID: 0000 00:22:00.750 Serial Number: ab66ac968db5bd9f4092 00:22:00.750 Model Number: Linux 00:22:00.750 Firmware Version: 6.7.0-68 00:22:00.750 Recommended Arb Burst: 0 00:22:00.750 IEEE OUI Identifier: 00 00 00 00:22:00.750 Multi-path I/O 00:22:00.750 May have multiple subsystem ports: No 00:22:00.750 May have multiple controllers: No 00:22:00.750 Associated with SR-IOV VF: No 00:22:00.750 Max Data Transfer Size: Unlimited 00:22:00.750 Max Number of Namespaces: 0 00:22:00.750 Max Number of I/O Queues: 1024 00:22:00.750 NVMe Specification Version (VS): 1.3 00:22:00.750 NVMe Specification Version (Identify): 1.3 00:22:00.750 Maximum Queue Entries: 1024 00:22:00.750 Contiguous Queues Required: No 00:22:00.750 Arbitration Mechanisms Supported 00:22:00.750 Weighted Round Robin: Not Supported 00:22:00.750 Vendor Specific: Not Supported 00:22:00.750 Reset Timeout: 7500 ms 00:22:00.750 Doorbell Stride: 4 bytes 00:22:00.750 NVM Subsystem Reset: Not Supported 00:22:00.750 Command Sets Supported 00:22:00.750 NVM Command Set: Supported 00:22:00.750 Boot Partition: Not Supported 00:22:00.750 Memory Page Size Minimum: 4096 bytes 00:22:00.750 Memory Page Size Maximum: 4096 bytes 00:22:00.750 Persistent Memory Region: Not Supported 00:22:00.750 Optional Asynchronous Events Supported 00:22:00.750 Namespace Attribute Notices: Not Supported 00:22:00.750 Firmware Activation Notices: Not Supported 00:22:00.750 ANA Change Notices: Not Supported 00:22:00.750 PLE Aggregate Log Change Notices: Not Supported 00:22:00.750 LBA Status Info Alert Notices: Not Supported 00:22:00.750 EGE Aggregate Log Change Notices: Not Supported 00:22:00.750 Normal NVM Subsystem Shutdown event: Not Supported 00:22:00.750 Zone Descriptor Change Notices: Not Supported 00:22:00.750 Discovery Log Change Notices: Supported 00:22:00.750 Controller Attributes 00:22:00.750 128-bit Host Identifier: Not Supported 00:22:00.750 Non-Operational Permissive Mode: Not Supported 00:22:00.750 NVM Sets: Not Supported 00:22:00.750 Read Recovery Levels: Not Supported 00:22:00.750 Endurance Groups: Not Supported 00:22:00.750 Predictable Latency Mode: Not Supported 00:22:00.750 Traffic Based Keep ALive: Not Supported 00:22:00.750 Namespace Granularity: Not Supported 00:22:00.750 SQ Associations: Not Supported 00:22:00.750 UUID List: Not Supported 00:22:00.750 Multi-Domain Subsystem: Not Supported 00:22:00.750 Fixed Capacity Management: Not Supported 
00:22:00.750 Variable Capacity Management: Not Supported 00:22:00.750 Delete Endurance Group: Not Supported 00:22:00.750 Delete NVM Set: Not Supported 00:22:00.750 Extended LBA Formats Supported: Not Supported 00:22:00.750 Flexible Data Placement Supported: Not Supported 00:22:00.750 00:22:00.750 Controller Memory Buffer Support 00:22:00.750 ================================ 00:22:00.750 Supported: No 00:22:00.750 00:22:00.750 Persistent Memory Region Support 00:22:00.750 ================================ 00:22:00.750 Supported: No 00:22:00.750 00:22:00.750 Admin Command Set Attributes 00:22:00.750 ============================ 00:22:00.750 Security Send/Receive: Not Supported 00:22:00.750 Format NVM: Not Supported 00:22:00.750 Firmware Activate/Download: Not Supported 00:22:00.750 Namespace Management: Not Supported 00:22:00.750 Device Self-Test: Not Supported 00:22:00.750 Directives: Not Supported 00:22:00.750 NVMe-MI: Not Supported 00:22:00.750 Virtualization Management: Not Supported 00:22:00.750 Doorbell Buffer Config: Not Supported 00:22:00.750 Get LBA Status Capability: Not Supported 00:22:00.750 Command & Feature Lockdown Capability: Not Supported 00:22:00.750 Abort Command Limit: 1 00:22:00.750 Async Event Request Limit: 1 00:22:00.750 Number of Firmware Slots: N/A 00:22:00.750 Firmware Slot 1 Read-Only: N/A 00:22:00.750 Firmware Activation Without Reset: N/A 00:22:00.750 Multiple Update Detection Support: N/A 00:22:00.750 Firmware Update Granularity: No Information Provided 00:22:00.750 Per-Namespace SMART Log: No 00:22:00.751 Asymmetric Namespace Access Log Page: Not Supported 00:22:00.751 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:00.751 Command Effects Log Page: Not Supported 00:22:00.751 Get Log Page Extended Data: Supported 00:22:00.751 Telemetry Log Pages: Not Supported 00:22:00.751 Persistent Event Log Pages: Not Supported 00:22:00.751 Supported Log Pages Log Page: May Support 00:22:00.751 Commands Supported & Effects Log Page: Not Supported 00:22:00.751 Feature Identifiers & Effects Log Page:May Support 00:22:00.751 NVMe-MI Commands & Effects Log Page: May Support 00:22:00.751 Data Area 4 for Telemetry Log: Not Supported 00:22:00.751 Error Log Page Entries Supported: 1 00:22:00.751 Keep Alive: Not Supported 00:22:00.751 00:22:00.751 NVM Command Set Attributes 00:22:00.751 ========================== 00:22:00.751 Submission Queue Entry Size 00:22:00.751 Max: 1 00:22:00.751 Min: 1 00:22:00.751 Completion Queue Entry Size 00:22:00.751 Max: 1 00:22:00.751 Min: 1 00:22:00.751 Number of Namespaces: 0 00:22:00.751 Compare Command: Not Supported 00:22:00.751 Write Uncorrectable Command: Not Supported 00:22:00.751 Dataset Management Command: Not Supported 00:22:00.751 Write Zeroes Command: Not Supported 00:22:00.751 Set Features Save Field: Not Supported 00:22:00.751 Reservations: Not Supported 00:22:00.751 Timestamp: Not Supported 00:22:00.751 Copy: Not Supported 00:22:00.751 Volatile Write Cache: Not Present 00:22:00.751 Atomic Write Unit (Normal): 1 00:22:00.751 Atomic Write Unit (PFail): 1 00:22:00.751 Atomic Compare & Write Unit: 1 00:22:00.751 Fused Compare & Write: Not Supported 00:22:00.751 Scatter-Gather List 00:22:00.751 SGL Command Set: Supported 00:22:00.751 SGL Keyed: Not Supported 00:22:00.751 SGL Bit Bucket Descriptor: Not Supported 00:22:00.751 SGL Metadata Pointer: Not Supported 00:22:00.751 Oversized SGL: Not Supported 00:22:00.751 SGL Metadata Address: Not Supported 00:22:00.751 SGL Offset: Supported 00:22:00.751 Transport SGL Data Block: Not 
Supported 00:22:00.751 Replay Protected Memory Block: Not Supported 00:22:00.751 00:22:00.751 Firmware Slot Information 00:22:00.751 ========================= 00:22:00.751 Active slot: 0 00:22:00.751 00:22:00.751 00:22:00.751 Error Log 00:22:00.751 ========= 00:22:00.751 00:22:00.751 Active Namespaces 00:22:00.751 ================= 00:22:00.751 Discovery Log Page 00:22:00.751 ================== 00:22:00.751 Generation Counter: 2 00:22:00.751 Number of Records: 2 00:22:00.751 Record Format: 0 00:22:00.751 00:22:00.751 Discovery Log Entry 0 00:22:00.751 ---------------------- 00:22:00.751 Transport Type: 3 (TCP) 00:22:00.751 Address Family: 1 (IPv4) 00:22:00.751 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:00.751 Entry Flags: 00:22:00.751 Duplicate Returned Information: 0 00:22:00.751 Explicit Persistent Connection Support for Discovery: 0 00:22:00.751 Transport Requirements: 00:22:00.751 Secure Channel: Not Specified 00:22:00.751 Port ID: 1 (0x0001) 00:22:00.751 Controller ID: 65535 (0xffff) 00:22:00.751 Admin Max SQ Size: 32 00:22:00.751 Transport Service Identifier: 4420 00:22:00.751 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:00.751 Transport Address: 10.0.0.1 00:22:00.751 Discovery Log Entry 1 00:22:00.751 ---------------------- 00:22:00.751 Transport Type: 3 (TCP) 00:22:00.751 Address Family: 1 (IPv4) 00:22:00.751 Subsystem Type: 2 (NVM Subsystem) 00:22:00.751 Entry Flags: 00:22:00.751 Duplicate Returned Information: 0 00:22:00.751 Explicit Persistent Connection Support for Discovery: 0 00:22:00.751 Transport Requirements: 00:22:00.751 Secure Channel: Not Specified 00:22:00.751 Port ID: 1 (0x0001) 00:22:00.751 Controller ID: 65535 (0xffff) 00:22:00.751 Admin Max SQ Size: 32 00:22:00.751 Transport Service Identifier: 4420 00:22:00.751 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:00.751 Transport Address: 10.0.0.1 00:22:00.751 17:25:30 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:01.012 get_feature(0x01) failed 00:22:01.012 get_feature(0x02) failed 00:22:01.012 get_feature(0x04) failed 00:22:01.012 ===================================================== 00:22:01.012 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:01.012 ===================================================== 00:22:01.012 Controller Capabilities/Features 00:22:01.012 ================================ 00:22:01.012 Vendor ID: 0000 00:22:01.012 Subsystem Vendor ID: 0000 00:22:01.012 Serial Number: 0d11a63f1739c8c0fc85 00:22:01.012 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:01.012 Firmware Version: 6.7.0-68 00:22:01.012 Recommended Arb Burst: 6 00:22:01.012 IEEE OUI Identifier: 00 00 00 00:22:01.012 Multi-path I/O 00:22:01.012 May have multiple subsystem ports: Yes 00:22:01.012 May have multiple controllers: Yes 00:22:01.012 Associated with SR-IOV VF: No 00:22:01.012 Max Data Transfer Size: Unlimited 00:22:01.012 Max Number of Namespaces: 1024 00:22:01.012 Max Number of I/O Queues: 128 00:22:01.012 NVMe Specification Version (VS): 1.3 00:22:01.012 NVMe Specification Version (Identify): 1.3 00:22:01.012 Maximum Queue Entries: 1024 00:22:01.012 Contiguous Queues Required: No 00:22:01.012 Arbitration Mechanisms Supported 00:22:01.012 Weighted Round Robin: Not Supported 00:22:01.012 Vendor Specific: Not Supported 00:22:01.012 Reset Timeout: 7500 ms 00:22:01.012 Doorbell Stride: 4 bytes 
00:22:01.012 NVM Subsystem Reset: Not Supported 00:22:01.012 Command Sets Supported 00:22:01.012 NVM Command Set: Supported 00:22:01.012 Boot Partition: Not Supported 00:22:01.012 Memory Page Size Minimum: 4096 bytes 00:22:01.012 Memory Page Size Maximum: 4096 bytes 00:22:01.012 Persistent Memory Region: Not Supported 00:22:01.012 Optional Asynchronous Events Supported 00:22:01.012 Namespace Attribute Notices: Supported 00:22:01.012 Firmware Activation Notices: Not Supported 00:22:01.012 ANA Change Notices: Supported 00:22:01.012 PLE Aggregate Log Change Notices: Not Supported 00:22:01.012 LBA Status Info Alert Notices: Not Supported 00:22:01.012 EGE Aggregate Log Change Notices: Not Supported 00:22:01.012 Normal NVM Subsystem Shutdown event: Not Supported 00:22:01.012 Zone Descriptor Change Notices: Not Supported 00:22:01.012 Discovery Log Change Notices: Not Supported 00:22:01.012 Controller Attributes 00:22:01.012 128-bit Host Identifier: Supported 00:22:01.012 Non-Operational Permissive Mode: Not Supported 00:22:01.012 NVM Sets: Not Supported 00:22:01.012 Read Recovery Levels: Not Supported 00:22:01.012 Endurance Groups: Not Supported 00:22:01.012 Predictable Latency Mode: Not Supported 00:22:01.012 Traffic Based Keep ALive: Supported 00:22:01.012 Namespace Granularity: Not Supported 00:22:01.012 SQ Associations: Not Supported 00:22:01.012 UUID List: Not Supported 00:22:01.012 Multi-Domain Subsystem: Not Supported 00:22:01.012 Fixed Capacity Management: Not Supported 00:22:01.012 Variable Capacity Management: Not Supported 00:22:01.012 Delete Endurance Group: Not Supported 00:22:01.012 Delete NVM Set: Not Supported 00:22:01.012 Extended LBA Formats Supported: Not Supported 00:22:01.012 Flexible Data Placement Supported: Not Supported 00:22:01.012 00:22:01.012 Controller Memory Buffer Support 00:22:01.012 ================================ 00:22:01.012 Supported: No 00:22:01.012 00:22:01.012 Persistent Memory Region Support 00:22:01.012 ================================ 00:22:01.012 Supported: No 00:22:01.012 00:22:01.012 Admin Command Set Attributes 00:22:01.012 ============================ 00:22:01.012 Security Send/Receive: Not Supported 00:22:01.012 Format NVM: Not Supported 00:22:01.012 Firmware Activate/Download: Not Supported 00:22:01.012 Namespace Management: Not Supported 00:22:01.012 Device Self-Test: Not Supported 00:22:01.012 Directives: Not Supported 00:22:01.012 NVMe-MI: Not Supported 00:22:01.012 Virtualization Management: Not Supported 00:22:01.012 Doorbell Buffer Config: Not Supported 00:22:01.012 Get LBA Status Capability: Not Supported 00:22:01.012 Command & Feature Lockdown Capability: Not Supported 00:22:01.012 Abort Command Limit: 4 00:22:01.012 Async Event Request Limit: 4 00:22:01.012 Number of Firmware Slots: N/A 00:22:01.012 Firmware Slot 1 Read-Only: N/A 00:22:01.012 Firmware Activation Without Reset: N/A 00:22:01.012 Multiple Update Detection Support: N/A 00:22:01.012 Firmware Update Granularity: No Information Provided 00:22:01.012 Per-Namespace SMART Log: Yes 00:22:01.012 Asymmetric Namespace Access Log Page: Supported 00:22:01.012 ANA Transition Time : 10 sec 00:22:01.012 00:22:01.012 Asymmetric Namespace Access Capabilities 00:22:01.012 ANA Optimized State : Supported 00:22:01.012 ANA Non-Optimized State : Supported 00:22:01.012 ANA Inaccessible State : Supported 00:22:01.012 ANA Persistent Loss State : Supported 00:22:01.012 ANA Change State : Supported 00:22:01.012 ANAGRPID is not changed : No 00:22:01.012 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:22:01.012 00:22:01.012 ANA Group Identifier Maximum : 128 00:22:01.012 Number of ANA Group Identifiers : 128 00:22:01.012 Max Number of Allowed Namespaces : 1024 00:22:01.012 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:01.012 Command Effects Log Page: Supported 00:22:01.012 Get Log Page Extended Data: Supported 00:22:01.012 Telemetry Log Pages: Not Supported 00:22:01.012 Persistent Event Log Pages: Not Supported 00:22:01.012 Supported Log Pages Log Page: May Support 00:22:01.012 Commands Supported & Effects Log Page: Not Supported 00:22:01.012 Feature Identifiers & Effects Log Page:May Support 00:22:01.012 NVMe-MI Commands & Effects Log Page: May Support 00:22:01.012 Data Area 4 for Telemetry Log: Not Supported 00:22:01.012 Error Log Page Entries Supported: 128 00:22:01.012 Keep Alive: Supported 00:22:01.012 Keep Alive Granularity: 1000 ms 00:22:01.012 00:22:01.012 NVM Command Set Attributes 00:22:01.012 ========================== 00:22:01.012 Submission Queue Entry Size 00:22:01.012 Max: 64 00:22:01.012 Min: 64 00:22:01.012 Completion Queue Entry Size 00:22:01.012 Max: 16 00:22:01.012 Min: 16 00:22:01.012 Number of Namespaces: 1024 00:22:01.012 Compare Command: Not Supported 00:22:01.012 Write Uncorrectable Command: Not Supported 00:22:01.012 Dataset Management Command: Supported 00:22:01.012 Write Zeroes Command: Supported 00:22:01.012 Set Features Save Field: Not Supported 00:22:01.012 Reservations: Not Supported 00:22:01.012 Timestamp: Not Supported 00:22:01.012 Copy: Not Supported 00:22:01.012 Volatile Write Cache: Present 00:22:01.012 Atomic Write Unit (Normal): 1 00:22:01.012 Atomic Write Unit (PFail): 1 00:22:01.012 Atomic Compare & Write Unit: 1 00:22:01.012 Fused Compare & Write: Not Supported 00:22:01.012 Scatter-Gather List 00:22:01.012 SGL Command Set: Supported 00:22:01.012 SGL Keyed: Not Supported 00:22:01.012 SGL Bit Bucket Descriptor: Not Supported 00:22:01.012 SGL Metadata Pointer: Not Supported 00:22:01.012 Oversized SGL: Not Supported 00:22:01.012 SGL Metadata Address: Not Supported 00:22:01.012 SGL Offset: Supported 00:22:01.012 Transport SGL Data Block: Not Supported 00:22:01.012 Replay Protected Memory Block: Not Supported 00:22:01.012 00:22:01.012 Firmware Slot Information 00:22:01.012 ========================= 00:22:01.012 Active slot: 0 00:22:01.012 00:22:01.012 Asymmetric Namespace Access 00:22:01.012 =========================== 00:22:01.012 Change Count : 0 00:22:01.012 Number of ANA Group Descriptors : 1 00:22:01.012 ANA Group Descriptor : 0 00:22:01.012 ANA Group ID : 1 00:22:01.012 Number of NSID Values : 1 00:22:01.012 Change Count : 0 00:22:01.013 ANA State : 1 00:22:01.013 Namespace Identifier : 1 00:22:01.013 00:22:01.013 Commands Supported and Effects 00:22:01.013 ============================== 00:22:01.013 Admin Commands 00:22:01.013 -------------- 00:22:01.013 Get Log Page (02h): Supported 00:22:01.013 Identify (06h): Supported 00:22:01.013 Abort (08h): Supported 00:22:01.013 Set Features (09h): Supported 00:22:01.013 Get Features (0Ah): Supported 00:22:01.013 Asynchronous Event Request (0Ch): Supported 00:22:01.013 Keep Alive (18h): Supported 00:22:01.013 I/O Commands 00:22:01.013 ------------ 00:22:01.013 Flush (00h): Supported 00:22:01.013 Write (01h): Supported LBA-Change 00:22:01.013 Read (02h): Supported 00:22:01.013 Write Zeroes (08h): Supported LBA-Change 00:22:01.013 Dataset Management (09h): Supported 00:22:01.013 00:22:01.013 Error Log 00:22:01.013 ========= 00:22:01.013 Entry: 0 00:22:01.013 Error Count: 0x3 00:22:01.013 Submission 
Queue Id: 0x0 00:22:01.013 Command Id: 0x5 00:22:01.013 Phase Bit: 0 00:22:01.013 Status Code: 0x2 00:22:01.013 Status Code Type: 0x0 00:22:01.013 Do Not Retry: 1 00:22:01.013 Error Location: 0x28 00:22:01.013 LBA: 0x0 00:22:01.013 Namespace: 0x0 00:22:01.013 Vendor Log Page: 0x0 00:22:01.013 ----------- 00:22:01.013 Entry: 1 00:22:01.013 Error Count: 0x2 00:22:01.013 Submission Queue Id: 0x0 00:22:01.013 Command Id: 0x5 00:22:01.013 Phase Bit: 0 00:22:01.013 Status Code: 0x2 00:22:01.013 Status Code Type: 0x0 00:22:01.013 Do Not Retry: 1 00:22:01.013 Error Location: 0x28 00:22:01.013 LBA: 0x0 00:22:01.013 Namespace: 0x0 00:22:01.013 Vendor Log Page: 0x0 00:22:01.013 ----------- 00:22:01.013 Entry: 2 00:22:01.013 Error Count: 0x1 00:22:01.013 Submission Queue Id: 0x0 00:22:01.013 Command Id: 0x4 00:22:01.013 Phase Bit: 0 00:22:01.013 Status Code: 0x2 00:22:01.013 Status Code Type: 0x0 00:22:01.013 Do Not Retry: 1 00:22:01.013 Error Location: 0x28 00:22:01.013 LBA: 0x0 00:22:01.013 Namespace: 0x0 00:22:01.013 Vendor Log Page: 0x0 00:22:01.013 00:22:01.013 Number of Queues 00:22:01.013 ================ 00:22:01.013 Number of I/O Submission Queues: 128 00:22:01.013 Number of I/O Completion Queues: 128 00:22:01.013 00:22:01.013 ZNS Specific Controller Data 00:22:01.013 ============================ 00:22:01.013 Zone Append Size Limit: 0 00:22:01.013 00:22:01.013 00:22:01.013 Active Namespaces 00:22:01.013 ================= 00:22:01.013 get_feature(0x05) failed 00:22:01.013 Namespace ID:1 00:22:01.013 Command Set Identifier: NVM (00h) 00:22:01.013 Deallocate: Supported 00:22:01.013 Deallocated/Unwritten Error: Not Supported 00:22:01.013 Deallocated Read Value: Unknown 00:22:01.013 Deallocate in Write Zeroes: Not Supported 00:22:01.013 Deallocated Guard Field: 0xFFFF 00:22:01.013 Flush: Supported 00:22:01.013 Reservation: Not Supported 00:22:01.013 Namespace Sharing Capabilities: Multiple Controllers 00:22:01.013 Size (in LBAs): 1310720 (5GiB) 00:22:01.013 Capacity (in LBAs): 1310720 (5GiB) 00:22:01.013 Utilization (in LBAs): 1310720 (5GiB) 00:22:01.013 UUID: ef68bd19-f2e7-4ebc-acfa-3dbdca2d08a4 00:22:01.013 Thin Provisioning: Not Supported 00:22:01.013 Per-NS Atomic Units: Yes 00:22:01.013 Atomic Boundary Size (Normal): 0 00:22:01.013 Atomic Boundary Size (PFail): 0 00:22:01.013 Atomic Boundary Offset: 0 00:22:01.013 NGUID/EUI64 Never Reused: No 00:22:01.013 ANA group ID: 1 00:22:01.013 Namespace Write Protected: No 00:22:01.013 Number of LBA Formats: 1 00:22:01.013 Current LBA Format: LBA Format #00 00:22:01.013 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:22:01.013 00:22:01.013 17:25:30 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:01.013 17:25:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:01.013 17:25:30 -- nvmf/common.sh@117 -- # sync 00:22:01.013 17:25:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:01.013 17:25:30 -- nvmf/common.sh@120 -- # set +e 00:22:01.013 17:25:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.013 17:25:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:01.013 rmmod nvme_tcp 00:22:01.013 rmmod nvme_fabrics 00:22:01.013 17:25:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.013 17:25:30 -- nvmf/common.sh@124 -- # set -e 00:22:01.013 17:25:30 -- nvmf/common.sh@125 -- # return 0 00:22:01.013 17:25:30 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:22:01.013 17:25:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:01.013 17:25:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:01.013 17:25:30 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:01.013 17:25:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.013 17:25:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:01.013 17:25:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.013 17:25:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.013 17:25:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.013 17:25:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:01.013 17:25:30 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:01.013 17:25:30 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:01.013 17:25:30 -- nvmf/common.sh@675 -- # echo 0 00:22:01.272 17:25:30 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:01.272 17:25:30 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:01.272 17:25:31 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:01.272 17:25:31 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:01.272 17:25:31 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:01.272 17:25:31 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:22:01.272 17:25:31 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:01.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:01.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:02.099 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:02.099 00:22:02.099 real 0m2.807s 00:22:02.099 user 0m1.007s 00:22:02.099 sys 0m1.283s 00:22:02.099 17:25:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:02.099 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:22:02.099 ************************************ 00:22:02.099 END TEST nvmf_identify_kernel_target 00:22:02.099 ************************************ 00:22:02.099 17:25:31 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:02.099 17:25:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:02.099 17:25:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:02.099 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:22:02.099 ************************************ 00:22:02.099 START TEST nvmf_auth 00:22:02.099 ************************************ 00:22:02.099 17:25:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:02.360 * Looking for test storage... 
00:22:02.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:02.360 17:25:32 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:02.360 17:25:32 -- nvmf/common.sh@7 -- # uname -s 00:22:02.360 17:25:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.360 17:25:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.360 17:25:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.360 17:25:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.360 17:25:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.360 17:25:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.360 17:25:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.360 17:25:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.360 17:25:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.360 17:25:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.360 17:25:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:22:02.360 17:25:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:22:02.360 17:25:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.360 17:25:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.360 17:25:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:02.360 17:25:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.360 17:25:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.360 17:25:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.360 17:25:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.360 17:25:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.360 17:25:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.360 17:25:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.360 17:25:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.360 17:25:32 -- paths/export.sh@5 -- # export PATH 00:22:02.360 17:25:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.360 17:25:32 -- nvmf/common.sh@47 -- # : 0 00:22:02.360 17:25:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.360 17:25:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.360 17:25:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.360 17:25:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.360 17:25:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.360 17:25:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.360 17:25:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.360 17:25:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.360 17:25:32 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:02.360 17:25:32 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:02.360 17:25:32 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:02.360 17:25:32 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:02.360 17:25:32 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:02.360 17:25:32 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:02.360 17:25:32 -- host/auth.sh@21 -- # keys=() 00:22:02.360 17:25:32 -- host/auth.sh@77 -- # nvmftestinit 00:22:02.360 17:25:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:02.360 17:25:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.360 17:25:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:02.360 17:25:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:02.360 17:25:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:02.360 17:25:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.360 17:25:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.360 17:25:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.360 17:25:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:02.360 17:25:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:02.360 17:25:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:02.360 17:25:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:02.360 17:25:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:02.360 17:25:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:02.360 17:25:32 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.360 17:25:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.360 17:25:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:02.360 17:25:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:02.360 17:25:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:02.360 17:25:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:02.360 17:25:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:02.360 17:25:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.360 17:25:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:02.360 17:25:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:02.360 17:25:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:02.360 17:25:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:02.360 17:25:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:02.360 17:25:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:02.360 Cannot find device "nvmf_tgt_br" 00:22:02.360 17:25:32 -- nvmf/common.sh@155 -- # true 00:22:02.360 17:25:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:02.360 Cannot find device "nvmf_tgt_br2" 00:22:02.360 17:25:32 -- nvmf/common.sh@156 -- # true 00:22:02.360 17:25:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:02.360 17:25:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:02.360 Cannot find device "nvmf_tgt_br" 00:22:02.360 17:25:32 -- nvmf/common.sh@158 -- # true 00:22:02.360 17:25:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:02.360 Cannot find device "nvmf_tgt_br2" 00:22:02.360 17:25:32 -- nvmf/common.sh@159 -- # true 00:22:02.360 17:25:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:02.360 17:25:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:02.360 17:25:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:02.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.360 17:25:32 -- nvmf/common.sh@162 -- # true 00:22:02.360 17:25:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:02.360 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.360 17:25:32 -- nvmf/common.sh@163 -- # true 00:22:02.360 17:25:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:02.360 17:25:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:02.620 17:25:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:02.620 17:25:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:02.620 17:25:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:02.620 17:25:32 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:02.620 17:25:32 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:02.620 17:25:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:02.620 17:25:32 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:02.620 17:25:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:02.620 17:25:32 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:02.620 17:25:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:02.620 17:25:32 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:02.620 17:25:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:02.620 17:25:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:02.620 17:25:32 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:02.620 17:25:32 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:02.620 17:25:32 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:02.620 17:25:32 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:02.620 17:25:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:02.620 17:25:32 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:02.620 17:25:32 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:02.620 17:25:32 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:02.620 17:25:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:02.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:22:02.620 00:22:02.620 --- 10.0.0.2 ping statistics --- 00:22:02.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.620 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:02.620 17:25:32 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:02.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:02.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:02.620 00:22:02.620 --- 10.0.0.3 ping statistics --- 00:22:02.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.620 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:02.620 17:25:32 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:02.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:02.620 00:22:02.620 --- 10.0.0.1 ping statistics --- 00:22:02.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.620 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:02.620 17:25:32 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.620 17:25:32 -- nvmf/common.sh@422 -- # return 0 00:22:02.620 17:25:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:02.620 17:25:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.620 17:25:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:02.620 17:25:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:02.620 17:25:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.620 17:25:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:02.620 17:25:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:02.620 17:25:32 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:22:02.620 17:25:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:02.620 17:25:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:02.620 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.620 17:25:32 -- nvmf/common.sh@470 -- # nvmfpid=90693 00:22:02.620 17:25:32 -- nvmf/common.sh@471 -- # waitforlisten 90693 00:22:02.620 17:25:32 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:02.620 17:25:32 -- common/autotest_common.sh@817 -- # '[' -z 90693 ']' 00:22:02.620 17:25:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.620 17:25:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:02.620 17:25:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
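[editor's note] For readers reconstructing the environment, the nvmf_veth_init trace above reduces to a small veth/namespace topology. The lines below are a minimal stand-alone sketch of that layout, using the interface names and addresses exactly as they appear in the log; the helper's cleanup, retries and error handling are omitted.

  # Minimal sketch of the topology built by nvmf_veth_init above (same names/addresses as the log).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as verified in the log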
00:22:02.620 17:25:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:02.620 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:22:04.000 17:25:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:04.000 17:25:33 -- common/autotest_common.sh@850 -- # return 0 00:22:04.000 17:25:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:04.000 17:25:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:04.000 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:22:04.000 17:25:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.000 17:25:33 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:04.000 17:25:33 -- host/auth.sh@81 -- # gen_key null 32 00:22:04.000 17:25:33 -- host/auth.sh@53 -- # local digest len file key 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # local -A digests 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # digest=null 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # len=32 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # key=ab07f2e31f7200b28a7e6d67752306e4 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.N31 00:22:04.000 17:25:33 -- host/auth.sh@59 -- # format_dhchap_key ab07f2e31f7200b28a7e6d67752306e4 0 00:22:04.000 17:25:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 ab07f2e31f7200b28a7e6d67752306e4 0 00:22:04.000 17:25:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # key=ab07f2e31f7200b28a7e6d67752306e4 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # digest=0 00:22:04.000 17:25:33 -- nvmf/common.sh@694 -- # python - 00:22:04.000 17:25:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.N31 00:22:04.000 17:25:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.N31 00:22:04.000 17:25:33 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.N31 00:22:04.000 17:25:33 -- host/auth.sh@82 -- # gen_key null 48 00:22:04.000 17:25:33 -- host/auth.sh@53 -- # local digest len file key 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # local -A digests 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # digest=null 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # len=48 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # key=a1e1598d7997d30993037f7ee3986e9fb1484a912ebd18a3 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.U3w 00:22:04.000 17:25:33 -- host/auth.sh@59 -- # format_dhchap_key a1e1598d7997d30993037f7ee3986e9fb1484a912ebd18a3 0 00:22:04.000 17:25:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 a1e1598d7997d30993037f7ee3986e9fb1484a912ebd18a3 0 00:22:04.000 17:25:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # key=a1e1598d7997d30993037f7ee3986e9fb1484a912ebd18a3 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # digest=0 00:22:04.000 
17:25:33 -- nvmf/common.sh@694 -- # python - 00:22:04.000 17:25:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.U3w 00:22:04.000 17:25:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.U3w 00:22:04.000 17:25:33 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.U3w 00:22:04.000 17:25:33 -- host/auth.sh@83 -- # gen_key sha256 32 00:22:04.000 17:25:33 -- host/auth.sh@53 -- # local digest len file key 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # local -A digests 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # digest=sha256 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # len=32 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # key=9683216872e15cd3534a3da5c7e516d4 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.qwv 00:22:04.000 17:25:33 -- host/auth.sh@59 -- # format_dhchap_key 9683216872e15cd3534a3da5c7e516d4 1 00:22:04.000 17:25:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 9683216872e15cd3534a3da5c7e516d4 1 00:22:04.000 17:25:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # key=9683216872e15cd3534a3da5c7e516d4 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # digest=1 00:22:04.000 17:25:33 -- nvmf/common.sh@694 -- # python - 00:22:04.000 17:25:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.qwv 00:22:04.000 17:25:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.qwv 00:22:04.000 17:25:33 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.qwv 00:22:04.000 17:25:33 -- host/auth.sh@84 -- # gen_key sha384 48 00:22:04.000 17:25:33 -- host/auth.sh@53 -- # local digest len file key 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # local -A digests 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # digest=sha384 00:22:04.000 17:25:33 -- host/auth.sh@56 -- # len=48 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:04.000 17:25:33 -- host/auth.sh@57 -- # key=77386b22b530d1ab8355e59aa6b8e67c20f6d269f8f1dd2c 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:22:04.000 17:25:33 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.QlE 00:22:04.000 17:25:33 -- host/auth.sh@59 -- # format_dhchap_key 77386b22b530d1ab8355e59aa6b8e67c20f6d269f8f1dd2c 2 00:22:04.000 17:25:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 77386b22b530d1ab8355e59aa6b8e67c20f6d269f8f1dd2c 2 00:22:04.000 17:25:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # key=77386b22b530d1ab8355e59aa6b8e67c20f6d269f8f1dd2c 00:22:04.000 17:25:33 -- nvmf/common.sh@693 -- # digest=2 00:22:04.000 17:25:33 -- nvmf/common.sh@694 -- # python - 00:22:04.000 17:25:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.QlE 00:22:04.000 17:25:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.QlE 00:22:04.000 17:25:33 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.QlE 00:22:04.000 17:25:33 -- host/auth.sh@85 -- # gen_key sha512 64 00:22:04.000 17:25:33 -- host/auth.sh@53 -- # local digest len file key 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:04.000 17:25:33 -- host/auth.sh@54 -- # local -A digests 00:22:04.001 17:25:33 -- host/auth.sh@56 -- # digest=sha512 00:22:04.001 17:25:33 -- host/auth.sh@56 -- # len=64 00:22:04.001 17:25:33 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:04.001 17:25:33 -- host/auth.sh@57 -- # key=cd4f73e0f2b8c15b419651732a7c4897b177dad4d722d5887d67b11a200c7d39 00:22:04.001 17:25:33 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:22:04.001 17:25:33 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.0ew 00:22:04.001 17:25:33 -- host/auth.sh@59 -- # format_dhchap_key cd4f73e0f2b8c15b419651732a7c4897b177dad4d722d5887d67b11a200c7d39 3 00:22:04.001 17:25:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 cd4f73e0f2b8c15b419651732a7c4897b177dad4d722d5887d67b11a200c7d39 3 00:22:04.001 17:25:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:04.001 17:25:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:04.001 17:25:33 -- nvmf/common.sh@693 -- # key=cd4f73e0f2b8c15b419651732a7c4897b177dad4d722d5887d67b11a200c7d39 00:22:04.001 17:25:33 -- nvmf/common.sh@693 -- # digest=3 00:22:04.001 17:25:33 -- nvmf/common.sh@694 -- # python - 00:22:04.001 17:25:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.0ew 00:22:04.001 17:25:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.0ew 00:22:04.001 17:25:33 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.0ew 00:22:04.001 17:25:33 -- host/auth.sh@87 -- # waitforlisten 90693 00:22:04.001 17:25:33 -- common/autotest_common.sh@817 -- # '[' -z 90693 ']' 00:22:04.001 17:25:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.001 17:25:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:04.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.001 17:25:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
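[editor's note] The five gen_key calls above all follow the same recipe: pull random bytes with xxd, then have a small inline python helper wrap them into a DHHC-1 secret written to a 0600 temp file. The sketch below reproduces that recipe; the base64(key || CRC-32, little endian) layout is an assumption based on the standard nvme-cli DH-HMAC-CHAP secret format, since the python body itself is not echoed in the trace.

  # Hedged sketch of gen_key/format_dhchap_key as traced above.
  hex_key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars of key material (null/sha256 case)
  digest_id=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512 (matches DHHC-1:0X: in the log)
  secret=$(python3 - "$hex_key" <<'EOF'
  import base64, struct, sys, zlib
  key = sys.argv[1].encode()                 # the ASCII hex string itself is the key material
  # assumed layout: base64 of key bytes followed by their CRC-32, little endian
  print(base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode())
  EOF
  )
  keyfile=$(mktemp -t spdk.key-null.XXX)
  printf 'DHHC-1:%02d:%s:\n' "$digest_id" "$secret" > "$keyfile"
  chmod 0600 "$keyfile"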
00:22:04.001 17:25:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:04.001 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:22:04.259 17:25:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:04.259 17:25:34 -- common/autotest_common.sh@850 -- # return 0 00:22:04.259 17:25:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:04.259 17:25:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.N31 00:22:04.259 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.259 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:22:04.259 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.259 17:25:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:04.259 17:25:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.U3w 00:22:04.259 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.259 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:22:04.259 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.259 17:25:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:04.259 17:25:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qwv 00:22:04.260 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.260 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:22:04.260 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.260 17:25:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:04.260 17:25:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.QlE 00:22:04.260 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.260 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:22:04.260 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.260 17:25:34 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:04.260 17:25:34 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0ew 00:22:04.260 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:04.260 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:22:04.260 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:04.260 17:25:34 -- host/auth.sh@92 -- # nvmet_auth_init 00:22:04.260 17:25:34 -- host/auth.sh@35 -- # get_main_ns_ip 00:22:04.260 17:25:34 -- nvmf/common.sh@717 -- # local ip 00:22:04.260 17:25:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:04.260 17:25:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:04.260 17:25:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.260 17:25:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.260 17:25:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:04.260 17:25:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.260 17:25:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:04.260 17:25:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:04.260 17:25:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:04.260 17:25:34 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:04.260 17:25:34 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:04.260 17:25:34 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:04.260 17:25:34 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:04.260 17:25:34 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:04.260 17:25:34 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:04.260 17:25:34 -- nvmf/common.sh@628 -- # local block nvme 00:22:04.260 17:25:34 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:04.260 17:25:34 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:04.518 17:25:34 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:04.518 17:25:34 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:04.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:04.777 Waiting for block devices as requested 00:22:04.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:04.777 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:05.344 17:25:35 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:05.344 17:25:35 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:05.344 17:25:35 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:05.344 17:25:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:05.344 17:25:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:05.344 17:25:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.344 17:25:35 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:05.344 17:25:35 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:05.344 17:25:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:05.605 No valid GPT data, bailing 00:22:05.605 17:25:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:05.605 17:25:35 -- scripts/common.sh@391 -- # pt= 00:22:05.605 17:25:35 -- scripts/common.sh@392 -- # return 1 00:22:05.605 17:25:35 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:05.605 17:25:35 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:05.605 17:25:35 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:05.605 17:25:35 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:22:05.605 17:25:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:22:05.605 17:25:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:05.605 17:25:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.605 17:25:35 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:22:05.605 17:25:35 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:05.605 17:25:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:05.605 No valid GPT data, bailing 00:22:05.605 17:25:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:05.605 17:25:35 -- scripts/common.sh@391 -- # pt= 00:22:05.605 17:25:35 -- scripts/common.sh@392 -- # return 1 00:22:05.605 17:25:35 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:22:05.605 17:25:35 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:05.605 17:25:35 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:05.605 17:25:35 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:22:05.605 17:25:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:22:05.605 17:25:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:05.605 17:25:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.605 17:25:35 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:22:05.605 17:25:35 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:05.605 17:25:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:05.605 No valid GPT data, bailing 00:22:05.605 17:25:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:05.605 17:25:35 -- scripts/common.sh@391 -- # pt= 00:22:05.605 17:25:35 -- scripts/common.sh@392 -- # return 1 00:22:05.605 17:25:35 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:22:05.605 17:25:35 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:05.605 17:25:35 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:05.605 17:25:35 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:22:05.605 17:25:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:05.605 17:25:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:05.605 17:25:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.605 17:25:35 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:22:05.605 17:25:35 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:05.605 17:25:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:05.605 No valid GPT data, bailing 00:22:05.881 17:25:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:05.881 17:25:35 -- scripts/common.sh@391 -- # pt= 00:22:05.881 17:25:35 -- scripts/common.sh@392 -- # return 1 00:22:05.881 17:25:35 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:22:05.881 17:25:35 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:22:05.881 17:25:35 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:05.881 17:25:35 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:05.881 17:25:35 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:05.881 17:25:35 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:05.881 17:25:35 -- nvmf/common.sh@656 -- # echo 1 00:22:05.881 17:25:35 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:22:05.881 17:25:35 -- nvmf/common.sh@658 -- # echo 1 00:22:05.881 17:25:35 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:05.881 17:25:35 -- nvmf/common.sh@661 -- # echo tcp 00:22:05.881 17:25:35 -- nvmf/common.sh@662 -- # echo 4420 00:22:05.881 17:25:35 -- nvmf/common.sh@663 -- # echo ipv4 00:22:05.881 17:25:35 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:05.881 17:25:35 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -a 10.0.0.1 -t tcp -s 4420 00:22:05.881 00:22:05.881 Discovery Log Number of Records 2, Generation counter 2 00:22:05.881 =====Discovery Log Entry 0====== 00:22:05.881 trtype: tcp 00:22:05.881 adrfam: ipv4 00:22:05.881 subtype: current discovery subsystem 00:22:05.881 treq: not specified, sq flow control disable supported 00:22:05.881 portid: 1 00:22:05.881 trsvcid: 4420 00:22:05.881 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:05.881 traddr: 10.0.0.1 00:22:05.881 eflags: none 00:22:05.881 sectype: none 00:22:05.881 =====Discovery Log Entry 1====== 00:22:05.881 trtype: tcp 00:22:05.881 adrfam: ipv4 00:22:05.881 subtype: nvme subsystem 00:22:05.881 treq: not specified, sq flow control disable supported 
00:22:05.881 portid: 1 00:22:05.881 trsvcid: 4420 00:22:05.881 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:05.881 traddr: 10.0.0.1 00:22:05.881 eflags: none 00:22:05.881 sectype: none 00:22:05.881 17:25:35 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:05.881 17:25:35 -- host/auth.sh@37 -- # echo 0 00:22:05.881 17:25:35 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:05.881 17:25:35 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:05.881 17:25:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:05.881 17:25:35 -- host/auth.sh@44 -- # digest=sha256 00:22:05.881 17:25:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:05.881 17:25:35 -- host/auth.sh@44 -- # keyid=1 00:22:05.881 17:25:35 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:05.881 17:25:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:05.881 17:25:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:05.881 17:25:35 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:05.881 17:25:35 -- host/auth.sh@100 -- # IFS=, 00:22:05.881 17:25:35 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:22:05.881 17:25:35 -- host/auth.sh@100 -- # IFS=, 00:22:05.881 17:25:35 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.881 17:25:35 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:05.881 17:25:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:05.881 17:25:35 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:22:05.881 17:25:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.881 17:25:35 -- host/auth.sh@68 -- # keyid=1 00:22:05.881 17:25:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.881 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.881 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:05.881 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:05.881 17:25:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:05.881 17:25:35 -- nvmf/common.sh@717 -- # local ip 00:22:05.881 17:25:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:05.881 17:25:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:05.881 17:25:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.881 17:25:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.881 17:25:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:05.881 17:25:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.881 17:25:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:05.881 17:25:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:05.881 17:25:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:05.881 17:25:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:05.881 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:05.881 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:06.152 
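[editor's note] On the target side, nvmet_auth_init and nvmet_auth_set_key in the trace operate purely through the kernel nvmet configfs tree. The redirection targets of the echoes are not visible in xtrace, so the attribute names below are assumptions based on the upstream nvmet configfs layout; the directories, the ln -s and the echoed values are taken verbatim from the log.

  # Assumed configfs writes behind the nvmet_auth_* echoes above.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  mkdir "$host"
  echo 0 > "$subsys/attr_allow_any_host"     # assumed target of the 'echo 0' in the trace
  ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest used for DH-HMAC-CHAP (assumed attribute name)
  echo 'ffdhe2048'    > "$host/dhchap_dhgroup"   # FF-DHE group (assumed attribute name)
  echo 'DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==:' > "$host/dhchap_key"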
nvme0n1 00:22:06.152 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.152 17:25:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.152 17:25:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:06.152 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.152 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:06.152 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.152 17:25:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.152 17:25:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.152 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.152 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:06.152 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.152 17:25:35 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:06.152 17:25:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.152 17:25:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:06.152 17:25:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:06.152 17:25:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:06.152 17:25:35 -- host/auth.sh@44 -- # digest=sha256 00:22:06.152 17:25:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:06.152 17:25:35 -- host/auth.sh@44 -- # keyid=0 00:22:06.152 17:25:35 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:06.152 17:25:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:06.152 17:25:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:06.152 17:25:35 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:06.152 17:25:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:22:06.152 17:25:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:06.152 17:25:35 -- host/auth.sh@68 -- # digest=sha256 00:22:06.152 17:25:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:06.152 17:25:35 -- host/auth.sh@68 -- # keyid=0 00:22:06.152 17:25:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:06.152 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.152 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:06.152 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.152 17:25:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:06.152 17:25:35 -- nvmf/common.sh@717 -- # local ip 00:22:06.152 17:25:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:06.152 17:25:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:06.152 17:25:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.153 17:25:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.153 17:25:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:06.153 17:25:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.153 17:25:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:06.153 17:25:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:06.153 17:25:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:06.153 17:25:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:06.153 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.153 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:06.153 nvme0n1 
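[editor's note] Each connect_authenticate iteration in the trace is a short RPC conversation with the nvmf_tgt started earlier; rpc_cmd in the log wraps scripts/rpc.py against the default /var/tmp/spdk.sock. Condensed from the commands logged above (the key0..key4 keyring names were registered earlier with keyring_file_add_key):

  # One connect_authenticate iteration, as logged above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" when authentication succeeds
  "$rpc" bdev_nvme_detach_controller nvme0              # tear down before the next digest/dhgroup/key combination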
00:22:06.153 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.153 17:25:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.153 17:25:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:06.153 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.153 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.153 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.153 17:25:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.153 17:25:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.153 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.153 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.153 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.153 17:25:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:06.153 17:25:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:06.153 17:25:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:06.153 17:25:36 -- host/auth.sh@44 -- # digest=sha256 00:22:06.153 17:25:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:06.153 17:25:36 -- host/auth.sh@44 -- # keyid=1 00:22:06.153 17:25:36 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:06.153 17:25:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:06.153 17:25:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:06.153 17:25:36 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:06.153 17:25:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:22:06.153 17:25:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:06.153 17:25:36 -- host/auth.sh@68 -- # digest=sha256 00:22:06.153 17:25:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:06.153 17:25:36 -- host/auth.sh@68 -- # keyid=1 00:22:06.153 17:25:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:06.153 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.153 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.412 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.412 17:25:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:06.412 17:25:36 -- nvmf/common.sh@717 -- # local ip 00:22:06.412 17:25:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:06.412 17:25:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:06.412 17:25:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.412 17:25:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.412 17:25:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:06.412 17:25:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.412 17:25:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:06.412 17:25:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:06.412 17:25:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:06.412 17:25:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:06.412 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.412 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.412 nvme0n1 00:22:06.412 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.412 17:25:36 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:22:06.412 17:25:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:06.412 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.412 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.412 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.412 17:25:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.412 17:25:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.412 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.412 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.412 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.412 17:25:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:06.412 17:25:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:06.412 17:25:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:06.412 17:25:36 -- host/auth.sh@44 -- # digest=sha256 00:22:06.412 17:25:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:06.412 17:25:36 -- host/auth.sh@44 -- # keyid=2 00:22:06.412 17:25:36 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:06.412 17:25:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:06.412 17:25:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:06.412 17:25:36 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:06.412 17:25:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:22:06.412 17:25:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:06.412 17:25:36 -- host/auth.sh@68 -- # digest=sha256 00:22:06.412 17:25:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:06.412 17:25:36 -- host/auth.sh@68 -- # keyid=2 00:22:06.412 17:25:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:06.412 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.412 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.412 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.412 17:25:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:06.412 17:25:36 -- nvmf/common.sh@717 -- # local ip 00:22:06.412 17:25:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:06.412 17:25:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:06.412 17:25:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.412 17:25:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.412 17:25:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:06.412 17:25:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.412 17:25:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:06.412 17:25:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:06.412 17:25:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:06.412 17:25:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:06.412 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.412 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 nvme0n1 00:22:06.671 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.671 17:25:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.671 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.671 17:25:36 -- common/autotest_common.sh@10 -- # 
set +x 00:22:06.671 17:25:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:06.671 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.671 17:25:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.671 17:25:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.671 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.671 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.671 17:25:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:06.671 17:25:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:06.671 17:25:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:06.671 17:25:36 -- host/auth.sh@44 -- # digest=sha256 00:22:06.671 17:25:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:06.671 17:25:36 -- host/auth.sh@44 -- # keyid=3 00:22:06.671 17:25:36 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:06.671 17:25:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:06.671 17:25:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:06.671 17:25:36 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:06.671 17:25:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:22:06.671 17:25:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:06.671 17:25:36 -- host/auth.sh@68 -- # digest=sha256 00:22:06.671 17:25:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:06.671 17:25:36 -- host/auth.sh@68 -- # keyid=3 00:22:06.671 17:25:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:06.671 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.671 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.671 17:25:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:06.671 17:25:36 -- nvmf/common.sh@717 -- # local ip 00:22:06.671 17:25:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:06.671 17:25:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:06.671 17:25:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.671 17:25:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.671 17:25:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:06.671 17:25:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.671 17:25:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:06.671 17:25:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:06.671 17:25:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:06.671 17:25:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:06.671 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.671 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.671 nvme0n1 00:22:06.671 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.671 17:25:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:06.672 17:25:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.672 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.672 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.672 17:25:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.672 17:25:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.672 17:25:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.672 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.672 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.931 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.931 17:25:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:06.931 17:25:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:06.931 17:25:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:06.931 17:25:36 -- host/auth.sh@44 -- # digest=sha256 00:22:06.931 17:25:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:06.931 17:25:36 -- host/auth.sh@44 -- # keyid=4 00:22:06.931 17:25:36 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:06.931 17:25:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:06.931 17:25:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:06.931 17:25:36 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:06.931 17:25:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:22:06.931 17:25:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:06.931 17:25:36 -- host/auth.sh@68 -- # digest=sha256 00:22:06.931 17:25:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:06.931 17:25:36 -- host/auth.sh@68 -- # keyid=4 00:22:06.931 17:25:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:06.931 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.931 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.931 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.931 17:25:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:06.931 17:25:36 -- nvmf/common.sh@717 -- # local ip 00:22:06.931 17:25:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:06.931 17:25:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:06.931 17:25:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.931 17:25:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.931 17:25:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:06.931 17:25:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.931 17:25:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:06.931 17:25:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:06.931 17:25:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:06.931 17:25:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:06.931 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.931 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.931 nvme0n1 00:22:06.931 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.931 17:25:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.931 17:25:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:06.931 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.931 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.931 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.931 17:25:36 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.931 17:25:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.931 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:06.931 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.931 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:06.931 17:25:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.931 17:25:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:06.931 17:25:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:06.931 17:25:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:06.931 17:25:36 -- host/auth.sh@44 -- # digest=sha256 00:22:06.931 17:25:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:06.931 17:25:36 -- host/auth.sh@44 -- # keyid=0 00:22:06.931 17:25:36 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:06.931 17:25:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:06.931 17:25:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:07.189 17:25:37 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:07.189 17:25:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:22:07.189 17:25:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:07.189 17:25:37 -- host/auth.sh@68 -- # digest=sha256 00:22:07.189 17:25:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:07.189 17:25:37 -- host/auth.sh@68 -- # keyid=0 00:22:07.189 17:25:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:07.189 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.189 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.189 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.189 17:25:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:07.189 17:25:37 -- nvmf/common.sh@717 -- # local ip 00:22:07.448 17:25:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:07.448 17:25:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:07.448 17:25:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.448 17:25:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.448 17:25:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:07.448 17:25:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.448 17:25:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:07.448 17:25:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:07.448 17:25:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:07.448 17:25:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:07.448 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.448 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.448 nvme0n1 00:22:07.448 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.448 17:25:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.448 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.448 17:25:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:07.448 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.448 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.448 17:25:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.448 17:25:37 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.448 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.448 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.448 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.448 17:25:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:07.448 17:25:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:07.448 17:25:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:07.448 17:25:37 -- host/auth.sh@44 -- # digest=sha256 00:22:07.448 17:25:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:07.448 17:25:37 -- host/auth.sh@44 -- # keyid=1 00:22:07.448 17:25:37 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:07.448 17:25:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:07.448 17:25:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:07.448 17:25:37 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:07.448 17:25:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:22:07.448 17:25:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:07.448 17:25:37 -- host/auth.sh@68 -- # digest=sha256 00:22:07.448 17:25:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:07.448 17:25:37 -- host/auth.sh@68 -- # keyid=1 00:22:07.448 17:25:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:07.448 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.448 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.448 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.448 17:25:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:07.448 17:25:37 -- nvmf/common.sh@717 -- # local ip 00:22:07.448 17:25:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:07.448 17:25:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:07.448 17:25:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.448 17:25:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.448 17:25:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:07.448 17:25:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.448 17:25:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:07.448 17:25:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:07.448 17:25:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:07.448 17:25:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:07.448 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.448 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.707 nvme0n1 00:22:07.707 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.707 17:25:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.707 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.707 17:25:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:07.707 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.707 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.707 17:25:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.707 17:25:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.707 17:25:37 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:07.707 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.707 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.707 17:25:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:07.707 17:25:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:07.707 17:25:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:07.707 17:25:37 -- host/auth.sh@44 -- # digest=sha256 00:22:07.707 17:25:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:07.707 17:25:37 -- host/auth.sh@44 -- # keyid=2 00:22:07.707 17:25:37 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:07.707 17:25:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:07.707 17:25:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:07.707 17:25:37 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:07.707 17:25:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:22:07.707 17:25:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:07.707 17:25:37 -- host/auth.sh@68 -- # digest=sha256 00:22:07.707 17:25:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:07.707 17:25:37 -- host/auth.sh@68 -- # keyid=2 00:22:07.707 17:25:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:07.707 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.707 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.707 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.707 17:25:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:07.707 17:25:37 -- nvmf/common.sh@717 -- # local ip 00:22:07.707 17:25:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:07.707 17:25:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:07.707 17:25:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.707 17:25:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.707 17:25:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:07.707 17:25:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.707 17:25:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:07.707 17:25:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:07.707 17:25:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:07.707 17:25:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:07.707 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.707 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.707 nvme0n1 00:22:07.707 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.967 17:25:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.967 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.967 17:25:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:07.967 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.967 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.967 17:25:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.967 17:25:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.967 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.967 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.967 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.967 
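[editor's note] The remainder of the log repeats the same pattern for every combination; schematically, host/auth.sh is sweeping the digest, DH group and key arrays declared at the top of the trace, as the @107-@110 loop markers show. A sketch of that outer sweep, with array contents taken from host/auth.sh@13 and @16 above:

  # Overall sweep driving the repeated blocks in this log.
  for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do                       # keys[0..4] are the generated DHHC-1 secrets
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid" # program the kernel target's host entry
        connect_authenticate "$digest" "$dhgroup" "$keyid" # attach with --dhchap-key, verify nvme0, detach
      done
    done
  done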
17:25:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:07.967 17:25:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:07.967 17:25:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:07.967 17:25:37 -- host/auth.sh@44 -- # digest=sha256 00:22:07.967 17:25:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:07.967 17:25:37 -- host/auth.sh@44 -- # keyid=3 00:22:07.967 17:25:37 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:07.967 17:25:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:07.967 17:25:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:07.967 17:25:37 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:07.967 17:25:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:22:07.967 17:25:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:07.967 17:25:37 -- host/auth.sh@68 -- # digest=sha256 00:22:07.967 17:25:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:07.967 17:25:37 -- host/auth.sh@68 -- # keyid=3 00:22:07.967 17:25:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:07.967 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.967 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.967 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.967 17:25:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:07.967 17:25:37 -- nvmf/common.sh@717 -- # local ip 00:22:07.967 17:25:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:07.967 17:25:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:07.967 17:25:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.967 17:25:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.967 17:25:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:07.967 17:25:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.967 17:25:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:07.967 17:25:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:07.967 17:25:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:07.967 17:25:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:07.968 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.968 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.968 nvme0n1 00:22:07.968 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.968 17:25:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.968 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.968 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:07.968 17:25:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:07.968 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:07.968 17:25:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.968 17:25:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.968 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:07.968 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.227 17:25:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:08.227 17:25:37 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:22:08.227 17:25:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:08.227 17:25:37 -- host/auth.sh@44 -- # digest=sha256 00:22:08.227 17:25:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:08.227 17:25:37 -- host/auth.sh@44 -- # keyid=4 00:22:08.227 17:25:37 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:08.227 17:25:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:08.227 17:25:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:08.227 17:25:37 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:08.227 17:25:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:22:08.227 17:25:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:08.227 17:25:37 -- host/auth.sh@68 -- # digest=sha256 00:22:08.227 17:25:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:08.227 17:25:37 -- host/auth.sh@68 -- # keyid=4 00:22:08.227 17:25:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:08.227 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.227 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.227 17:25:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:08.227 17:25:37 -- nvmf/common.sh@717 -- # local ip 00:22:08.227 17:25:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:08.227 17:25:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:08.227 17:25:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:08.227 17:25:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:08.227 17:25:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:08.227 17:25:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:08.227 17:25:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:08.227 17:25:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:08.227 17:25:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:08.227 17:25:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:08.227 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.227 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 nvme0n1 00:22:08.227 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.227 17:25:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.227 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.227 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 17:25:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:08.227 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.227 17:25:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.227 17:25:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.227 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.227 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:08.227 17:25:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.227 17:25:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:08.227 17:25:38 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:22:08.227 17:25:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:08.227 17:25:38 -- host/auth.sh@44 -- # digest=sha256 00:22:08.227 17:25:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:08.227 17:25:38 -- host/auth.sh@44 -- # keyid=0 00:22:08.227 17:25:38 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:08.227 17:25:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:08.227 17:25:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:08.794 17:25:38 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:08.794 17:25:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:22:08.794 17:25:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:08.794 17:25:38 -- host/auth.sh@68 -- # digest=sha256 00:22:08.794 17:25:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:08.794 17:25:38 -- host/auth.sh@68 -- # keyid=0 00:22:08.794 17:25:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:08.794 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:08.794 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:22:09.053 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.053 17:25:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:09.053 17:25:38 -- nvmf/common.sh@717 -- # local ip 00:22:09.053 17:25:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:09.053 17:25:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:09.053 17:25:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.053 17:25:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.053 17:25:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:09.053 17:25:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.053 17:25:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:09.053 17:25:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:09.053 17:25:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:09.053 17:25:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:09.053 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.053 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:22:09.053 nvme0n1 00:22:09.053 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.053 17:25:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.053 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.053 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:22:09.053 17:25:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:09.053 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.053 17:25:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.053 17:25:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.053 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.053 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.312 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.312 17:25:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:09.312 17:25:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:09.312 17:25:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:09.312 17:25:39 -- host/auth.sh@44 -- # 
digest=sha256 00:22:09.312 17:25:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:09.312 17:25:39 -- host/auth.sh@44 -- # keyid=1 00:22:09.312 17:25:39 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:09.312 17:25:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:09.312 17:25:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:09.312 17:25:39 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:09.312 17:25:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:22:09.312 17:25:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:09.312 17:25:39 -- host/auth.sh@68 -- # digest=sha256 00:22:09.312 17:25:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:09.312 17:25:39 -- host/auth.sh@68 -- # keyid=1 00:22:09.312 17:25:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:09.312 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.312 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.312 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.312 17:25:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:09.312 17:25:39 -- nvmf/common.sh@717 -- # local ip 00:22:09.312 17:25:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:09.312 17:25:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:09.312 17:25:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.312 17:25:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.312 17:25:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:09.312 17:25:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.312 17:25:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:09.312 17:25:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:09.312 17:25:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:09.312 17:25:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:09.312 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.312 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.312 nvme0n1 00:22:09.312 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.313 17:25:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.313 17:25:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:09.313 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.313 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.313 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.313 17:25:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.313 17:25:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.313 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.313 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.313 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.313 17:25:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:09.313 17:25:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:09.313 17:25:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:09.313 17:25:39 -- host/auth.sh@44 -- # digest=sha256 00:22:09.313 17:25:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:09.313 17:25:39 -- host/auth.sh@44 
-- # keyid=2 00:22:09.313 17:25:39 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:09.313 17:25:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:09.313 17:25:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:09.313 17:25:39 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:09.313 17:25:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:22:09.571 17:25:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:09.571 17:25:39 -- host/auth.sh@68 -- # digest=sha256 00:22:09.571 17:25:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:09.571 17:25:39 -- host/auth.sh@68 -- # keyid=2 00:22:09.571 17:25:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:09.571 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.571 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.572 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.572 17:25:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:09.572 17:25:39 -- nvmf/common.sh@717 -- # local ip 00:22:09.572 17:25:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:09.572 17:25:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:09.572 17:25:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.572 17:25:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.572 17:25:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:09.572 17:25:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.572 17:25:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:09.572 17:25:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:09.572 17:25:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:09.572 17:25:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:09.572 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.572 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.572 nvme0n1 00:22:09.572 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.572 17:25:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.572 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.572 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.572 17:25:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:09.572 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.572 17:25:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.572 17:25:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.572 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.572 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.572 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.572 17:25:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:09.572 17:25:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:09.572 17:25:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:09.572 17:25:39 -- host/auth.sh@44 -- # digest=sha256 00:22:09.572 17:25:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:09.572 17:25:39 -- host/auth.sh@44 -- # keyid=3 00:22:09.572 17:25:39 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:09.572 17:25:39 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:09.572 17:25:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:09.572 17:25:39 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:09.572 17:25:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:22:09.572 17:25:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:09.572 17:25:39 -- host/auth.sh@68 -- # digest=sha256 00:22:09.572 17:25:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:09.572 17:25:39 -- host/auth.sh@68 -- # keyid=3 00:22:09.572 17:25:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:09.572 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.572 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.831 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.831 17:25:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:09.831 17:25:39 -- nvmf/common.sh@717 -- # local ip 00:22:09.831 17:25:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:09.831 17:25:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:09.831 17:25:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.831 17:25:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.831 17:25:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:09.831 17:25:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.831 17:25:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:09.831 17:25:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:09.831 17:25:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:09.831 17:25:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:09.831 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.831 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.831 nvme0n1 00:22:09.831 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.831 17:25:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.831 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.831 17:25:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:09.831 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.831 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.831 17:25:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.831 17:25:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.831 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.831 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.831 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.831 17:25:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:09.831 17:25:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:09.831 17:25:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:09.831 17:25:39 -- host/auth.sh@44 -- # digest=sha256 00:22:09.831 17:25:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:09.831 17:25:39 -- host/auth.sh@44 -- # keyid=4 00:22:09.831 17:25:39 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:09.831 17:25:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:09.831 17:25:39 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:22:09.831 17:25:39 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:09.831 17:25:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:22:09.831 17:25:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:09.831 17:25:39 -- host/auth.sh@68 -- # digest=sha256 00:22:09.831 17:25:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:09.831 17:25:39 -- host/auth.sh@68 -- # keyid=4 00:22:09.831 17:25:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:09.831 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.831 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:10.090 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.090 17:25:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:10.090 17:25:39 -- nvmf/common.sh@717 -- # local ip 00:22:10.090 17:25:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:10.090 17:25:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:10.090 17:25:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.090 17:25:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.090 17:25:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:10.090 17:25:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.090 17:25:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:10.090 17:25:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:10.090 17:25:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:10.090 17:25:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:10.090 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.090 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:22:10.090 nvme0n1 00:22:10.090 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.090 17:25:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.090 17:25:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:10.090 17:25:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.090 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.090 17:25:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.090 17:25:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.090 17:25:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.090 17:25:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.090 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.090 17:25:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.090 17:25:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.090 17:25:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:10.349 17:25:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:10.349 17:25:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:10.349 17:25:40 -- host/auth.sh@44 -- # digest=sha256 00:22:10.349 17:25:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:10.349 17:25:40 -- host/auth.sh@44 -- # keyid=0 00:22:10.349 17:25:40 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:10.349 17:25:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:10.349 17:25:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:11.726 17:25:41 -- 
host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:11.726 17:25:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:22:11.726 17:25:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:11.726 17:25:41 -- host/auth.sh@68 -- # digest=sha256 00:22:11.726 17:25:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:11.726 17:25:41 -- host/auth.sh@68 -- # keyid=0 00:22:11.726 17:25:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:11.726 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.726 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:11.726 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.726 17:25:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:11.726 17:25:41 -- nvmf/common.sh@717 -- # local ip 00:22:11.726 17:25:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:11.726 17:25:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:11.726 17:25:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.726 17:25:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.726 17:25:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:11.726 17:25:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.726 17:25:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:11.726 17:25:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:11.726 17:25:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:11.726 17:25:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:11.726 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.726 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:11.984 nvme0n1 00:22:11.984 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.984 17:25:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.984 17:25:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:11.984 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.984 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:11.984 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.984 17:25:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.984 17:25:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.984 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:11.984 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:11.984 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:11.984 17:25:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:11.984 17:25:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:11.984 17:25:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:11.984 17:25:41 -- host/auth.sh@44 -- # digest=sha256 00:22:11.984 17:25:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:11.984 17:25:41 -- host/auth.sh@44 -- # keyid=1 00:22:11.984 17:25:41 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:11.984 17:25:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:11.984 17:25:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:11.984 17:25:41 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:11.984 17:25:41 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:22:11.984 17:25:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:11.984 17:25:41 -- host/auth.sh@68 -- # digest=sha256 00:22:11.984 17:25:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:11.984 17:25:41 -- host/auth.sh@68 -- # keyid=1 00:22:11.984 17:25:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:12.243 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.243 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:12.243 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.243 17:25:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:12.243 17:25:41 -- nvmf/common.sh@717 -- # local ip 00:22:12.243 17:25:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:12.243 17:25:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:12.243 17:25:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.243 17:25:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.243 17:25:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:12.243 17:25:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.243 17:25:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:12.243 17:25:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:12.243 17:25:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:12.243 17:25:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:12.243 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.243 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:22:12.501 nvme0n1 00:22:12.501 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.501 17:25:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.501 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.501 17:25:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:12.501 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.501 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.501 17:25:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.501 17:25:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.501 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.501 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.501 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.501 17:25:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:12.501 17:25:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:12.501 17:25:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:12.501 17:25:42 -- host/auth.sh@44 -- # digest=sha256 00:22:12.501 17:25:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:12.501 17:25:42 -- host/auth.sh@44 -- # keyid=2 00:22:12.501 17:25:42 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:12.501 17:25:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:12.501 17:25:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:12.501 17:25:42 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:12.501 17:25:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:22:12.501 17:25:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:12.501 17:25:42 -- 
host/auth.sh@68 -- # digest=sha256 00:22:12.501 17:25:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:12.501 17:25:42 -- host/auth.sh@68 -- # keyid=2 00:22:12.501 17:25:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:12.501 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.501 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.501 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.501 17:25:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:12.501 17:25:42 -- nvmf/common.sh@717 -- # local ip 00:22:12.501 17:25:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:12.501 17:25:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:12.501 17:25:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.501 17:25:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.501 17:25:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:12.501 17:25:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.501 17:25:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:12.501 17:25:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:12.501 17:25:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:12.501 17:25:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:12.501 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.501 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.759 nvme0n1 00:22:12.759 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.759 17:25:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.759 17:25:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:12.759 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.759 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.759 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.759 17:25:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.759 17:25:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.759 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.759 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.759 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.759 17:25:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:12.759 17:25:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:12.759 17:25:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:12.759 17:25:42 -- host/auth.sh@44 -- # digest=sha256 00:22:12.759 17:25:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:12.759 17:25:42 -- host/auth.sh@44 -- # keyid=3 00:22:12.759 17:25:42 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:12.759 17:25:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:12.759 17:25:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:12.759 17:25:42 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:12.759 17:25:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:22:12.759 17:25:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:12.759 17:25:42 -- host/auth.sh@68 -- # digest=sha256 00:22:12.759 17:25:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:12.759 17:25:42 
-- host/auth.sh@68 -- # keyid=3 00:22:12.759 17:25:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:12.759 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.759 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.759 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.017 17:25:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:13.017 17:25:42 -- nvmf/common.sh@717 -- # local ip 00:22:13.017 17:25:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:13.017 17:25:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:13.017 17:25:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.017 17:25:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.017 17:25:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:13.017 17:25:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.017 17:25:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:13.017 17:25:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:13.017 17:25:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:13.017 17:25:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:13.017 17:25:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.017 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:22:13.276 nvme0n1 00:22:13.276 17:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.276 17:25:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.276 17:25:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:13.276 17:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.276 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.276 17:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.276 17:25:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.276 17:25:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.276 17:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.276 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.276 17:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.276 17:25:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:13.276 17:25:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:13.276 17:25:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:13.276 17:25:43 -- host/auth.sh@44 -- # digest=sha256 00:22:13.276 17:25:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:13.276 17:25:43 -- host/auth.sh@44 -- # keyid=4 00:22:13.276 17:25:43 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:13.276 17:25:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:13.276 17:25:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:13.276 17:25:43 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:13.276 17:25:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:22:13.276 17:25:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:13.276 17:25:43 -- host/auth.sh@68 -- # digest=sha256 00:22:13.276 17:25:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:13.276 17:25:43 -- host/auth.sh@68 -- # keyid=4 00:22:13.276 17:25:43 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:13.276 17:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.276 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.276 17:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.276 17:25:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:13.276 17:25:43 -- nvmf/common.sh@717 -- # local ip 00:22:13.276 17:25:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:13.276 17:25:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:13.276 17:25:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.276 17:25:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.276 17:25:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:13.276 17:25:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.276 17:25:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:13.276 17:25:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:13.276 17:25:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:13.276 17:25:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:13.276 17:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.276 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.533 nvme0n1 00:22:13.533 17:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.533 17:25:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.533 17:25:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:13.533 17:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.533 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.533 17:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.533 17:25:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.533 17:25:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.533 17:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:13.533 17:25:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.533 17:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.533 17:25:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.533 17:25:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:13.533 17:25:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:13.533 17:25:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:13.533 17:25:43 -- host/auth.sh@44 -- # digest=sha256 00:22:13.533 17:25:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:13.533 17:25:43 -- host/auth.sh@44 -- # keyid=0 00:22:13.533 17:25:43 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:13.533 17:25:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:13.533 17:25:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:16.817 17:25:46 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:16.817 17:25:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:22:16.817 17:25:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:16.817 17:25:46 -- host/auth.sh@68 -- # digest=sha256 00:22:16.817 17:25:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:16.817 17:25:46 -- host/auth.sh@68 -- # keyid=0 00:22:16.817 17:25:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:22:16.817 17:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.817 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:22:16.817 17:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.817 17:25:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:16.817 17:25:46 -- nvmf/common.sh@717 -- # local ip 00:22:16.817 17:25:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:16.817 17:25:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:16.817 17:25:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.817 17:25:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.817 17:25:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:16.817 17:25:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.817 17:25:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:16.817 17:25:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:16.817 17:25:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:16.817 17:25:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:16.817 17:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.817 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:22:17.385 nvme0n1 00:22:17.385 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.385 17:25:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.385 17:25:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:17.385 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.385 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.385 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.385 17:25:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.385 17:25:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.385 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.385 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.385 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.385 17:25:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:17.385 17:25:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:17.385 17:25:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:17.385 17:25:47 -- host/auth.sh@44 -- # digest=sha256 00:22:17.385 17:25:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.385 17:25:47 -- host/auth.sh@44 -- # keyid=1 00:22:17.385 17:25:47 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:17.385 17:25:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:17.385 17:25:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:17.385 17:25:47 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:17.385 17:25:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:22:17.385 17:25:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:17.385 17:25:47 -- host/auth.sh@68 -- # digest=sha256 00:22:17.385 17:25:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:17.385 17:25:47 -- host/auth.sh@68 -- # keyid=1 00:22:17.385 17:25:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:17.385 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.385 17:25:47 -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.385 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.385 17:25:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:17.385 17:25:47 -- nvmf/common.sh@717 -- # local ip 00:22:17.385 17:25:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:17.385 17:25:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:17.385 17:25:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.385 17:25:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.385 17:25:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:17.385 17:25:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.385 17:25:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:17.385 17:25:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:17.385 17:25:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:17.385 17:25:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:17.385 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.385 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.953 nvme0n1 00:22:17.953 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.953 17:25:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.953 17:25:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:17.953 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.953 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.953 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.953 17:25:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.953 17:25:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.953 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.953 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.953 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.953 17:25:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:17.953 17:25:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:17.953 17:25:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:17.953 17:25:47 -- host/auth.sh@44 -- # digest=sha256 00:22:17.953 17:25:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.953 17:25:47 -- host/auth.sh@44 -- # keyid=2 00:22:17.953 17:25:47 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:17.953 17:25:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:17.953 17:25:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:17.953 17:25:47 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:17.953 17:25:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:22:17.953 17:25:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:17.953 17:25:47 -- host/auth.sh@68 -- # digest=sha256 00:22:17.953 17:25:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:17.953 17:25:47 -- host/auth.sh@68 -- # keyid=2 00:22:17.953 17:25:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:17.953 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.953 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:22:17.953 17:25:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.953 17:25:47 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:22:17.953 17:25:47 -- nvmf/common.sh@717 -- # local ip 00:22:17.953 17:25:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:17.953 17:25:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:17.953 17:25:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.953 17:25:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.953 17:25:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:17.953 17:25:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.953 17:25:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:17.953 17:25:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:17.953 17:25:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:17.953 17:25:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:17.953 17:25:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.953 17:25:47 -- common/autotest_common.sh@10 -- # set +x 00:22:18.520 nvme0n1 00:22:18.520 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.520 17:25:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.520 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.520 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:18.520 17:25:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:18.520 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.520 17:25:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.520 17:25:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.520 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.520 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:18.520 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.520 17:25:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:18.520 17:25:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:18.520 17:25:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:18.520 17:25:48 -- host/auth.sh@44 -- # digest=sha256 00:22:18.520 17:25:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:18.520 17:25:48 -- host/auth.sh@44 -- # keyid=3 00:22:18.520 17:25:48 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:18.520 17:25:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:18.520 17:25:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:18.520 17:25:48 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:18.520 17:25:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:22:18.520 17:25:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:18.520 17:25:48 -- host/auth.sh@68 -- # digest=sha256 00:22:18.520 17:25:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:18.520 17:25:48 -- host/auth.sh@68 -- # keyid=3 00:22:18.520 17:25:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:18.520 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.520 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:18.520 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.520 17:25:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:18.520 17:25:48 -- nvmf/common.sh@717 -- # local ip 00:22:18.520 17:25:48 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:22:18.520 17:25:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:18.520 17:25:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.520 17:25:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.520 17:25:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:18.520 17:25:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:18.520 17:25:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:18.520 17:25:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:18.520 17:25:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:18.520 17:25:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:18.520 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.520 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 nvme0n1 00:22:19.088 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.088 17:25:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.088 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.088 17:25:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:19.088 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.088 17:25:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.088 17:25:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.088 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.088 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.088 17:25:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:19.088 17:25:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:19.088 17:25:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:19.088 17:25:48 -- host/auth.sh@44 -- # digest=sha256 00:22:19.088 17:25:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:19.088 17:25:48 -- host/auth.sh@44 -- # keyid=4 00:22:19.088 17:25:48 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:19.088 17:25:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:19.088 17:25:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:19.088 17:25:48 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:19.088 17:25:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:22:19.088 17:25:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:19.088 17:25:48 -- host/auth.sh@68 -- # digest=sha256 00:22:19.088 17:25:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:19.088 17:25:48 -- host/auth.sh@68 -- # keyid=4 00:22:19.088 17:25:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:19.088 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.088 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 17:25:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.088 17:25:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:19.088 17:25:48 -- nvmf/common.sh@717 -- # local ip 00:22:19.088 17:25:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.088 17:25:48 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:22:19.088 17:25:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.088 17:25:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.088 17:25:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:19.088 17:25:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.088 17:25:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:19.088 17:25:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:19.088 17:25:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:19.088 17:25:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:19.088 17:25:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.088 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:19.656 nvme0n1 00:22:19.656 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.656 17:25:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.656 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.656 17:25:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:19.656 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.656 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.656 17:25:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.656 17:25:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.656 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.656 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.656 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.656 17:25:49 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:19.656 17:25:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.656 17:25:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:19.656 17:25:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:19.656 17:25:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:19.656 17:25:49 -- host/auth.sh@44 -- # digest=sha384 00:22:19.656 17:25:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.656 17:25:49 -- host/auth.sh@44 -- # keyid=0 00:22:19.656 17:25:49 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:19.656 17:25:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:19.656 17:25:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:19.656 17:25:49 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:19.656 17:25:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:22:19.656 17:25:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:19.656 17:25:49 -- host/auth.sh@68 -- # digest=sha384 00:22:19.656 17:25:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:19.656 17:25:49 -- host/auth.sh@68 -- # keyid=0 00:22:19.656 17:25:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.656 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.656 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.656 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.656 17:25:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:19.656 17:25:49 -- nvmf/common.sh@717 -- # local ip 00:22:19.656 17:25:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.656 17:25:49 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:22:19.656 17:25:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.656 17:25:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.656 17:25:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:19.656 17:25:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.656 17:25:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:19.656 17:25:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:19.656 17:25:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:19.656 17:25:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:19.656 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.656 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.656 nvme0n1 00:22:19.656 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.656 17:25:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.656 17:25:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:19.656 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.656 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.656 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.915 17:25:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.916 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.916 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.916 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:19.916 17:25:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:19.916 17:25:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:19.916 17:25:49 -- host/auth.sh@44 -- # digest=sha384 00:22:19.916 17:25:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.916 17:25:49 -- host/auth.sh@44 -- # keyid=1 00:22:19.916 17:25:49 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:19.916 17:25:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:19.916 17:25:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:19.916 17:25:49 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:19.916 17:25:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:22:19.916 17:25:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:19.916 17:25:49 -- host/auth.sh@68 -- # digest=sha384 00:22:19.916 17:25:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:19.916 17:25:49 -- host/auth.sh@68 -- # keyid=1 00:22:19.916 17:25:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.916 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.916 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.916 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:19.916 17:25:49 -- nvmf/common.sh@717 -- # local ip 00:22:19.916 17:25:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.916 17:25:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:19.916 17:25:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.916 
17:25:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.916 17:25:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:19.916 17:25:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.916 17:25:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:19.916 17:25:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:19.916 17:25:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:19.916 17:25:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:19.916 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.916 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.916 nvme0n1 00:22:19.916 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.916 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.916 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.916 17:25:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:19.916 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.916 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.916 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.916 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:19.916 17:25:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:19.916 17:25:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:19.916 17:25:49 -- host/auth.sh@44 -- # digest=sha384 00:22:19.916 17:25:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:19.916 17:25:49 -- host/auth.sh@44 -- # keyid=2 00:22:19.916 17:25:49 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:19.916 17:25:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:19.916 17:25:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:19.916 17:25:49 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:19.916 17:25:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:22:19.916 17:25:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:19.916 17:25:49 -- host/auth.sh@68 -- # digest=sha384 00:22:19.916 17:25:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:19.916 17:25:49 -- host/auth.sh@68 -- # keyid=2 00:22:19.916 17:25:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.916 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.916 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:19.916 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.916 17:25:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:19.916 17:25:49 -- nvmf/common.sh@717 -- # local ip 00:22:19.916 17:25:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.916 17:25:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:19.916 17:25:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.916 17:25:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.916 17:25:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:19.916 17:25:49 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.916 17:25:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:19.916 17:25:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:19.916 17:25:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:19.916 17:25:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:19.916 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.916 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:20.175 nvme0n1 00:22:20.175 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.175 17:25:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.175 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.175 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:22:20.175 17:25:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.175 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.175 17:25:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.175 17:25:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.175 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.175 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.175 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.175 17:25:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:20.175 17:25:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:20.175 17:25:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.175 17:25:50 -- host/auth.sh@44 -- # digest=sha384 00:22:20.175 17:25:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.175 17:25:50 -- host/auth.sh@44 -- # keyid=3 00:22:20.175 17:25:50 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:20.175 17:25:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:20.175 17:25:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:20.175 17:25:50 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:20.175 17:25:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:22:20.175 17:25:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.175 17:25:50 -- host/auth.sh@68 -- # digest=sha384 00:22:20.175 17:25:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:20.175 17:25:50 -- host/auth.sh@68 -- # keyid=3 00:22:20.175 17:25:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:20.175 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.175 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.175 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.175 17:25:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.175 17:25:50 -- nvmf/common.sh@717 -- # local ip 00:22:20.175 17:25:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.175 17:25:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.175 17:25:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.175 17:25:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.175 17:25:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.175 17:25:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.175 17:25:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
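
The nvmf/common.sh@717-731 entries that recur before every attach are the helper that picks which address the initiator dials for the current transport. A minimal reconstruction of that logic from the trace follows; it is a sketch, not the verbatim SPDK source, and the fallback for an unset or unknown transport is an assumption the trace does not show.

# get_main_ns_ip, reconstructed from the xtrace above (sketch).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # value is the *name* of the variable to read
        ["tcp"]=NVMF_INITIATOR_IP
    )

    if [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]}" ]]; then
        ip=NVMF_INITIATOR_IP            # assumed fallback; this log only exercises tcp
    else
        ip=${ip_candidates[$TEST_TRANSPORT]}
    fi

    # Indirect expansion: read the variable whose name is stored in $ip,
    # which is why each trace block ends with 'echo 10.0.0.1'.
    [[ -n "${!ip}" ]] && echo "${!ip}"
}

# With the values visible in this log:
#   TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1  ->  prints 10.0.0.1
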
00:22:20.175 17:25:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.175 17:25:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.175 17:25:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:20.175 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.175 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.175 nvme0n1 00:22:20.175 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.175 17:25:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.176 17:25:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.176 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.176 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.435 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.435 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.435 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.435 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:20.435 17:25:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:20.435 17:25:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.435 17:25:50 -- host/auth.sh@44 -- # digest=sha384 00:22:20.435 17:25:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:20.435 17:25:50 -- host/auth.sh@44 -- # keyid=4 00:22:20.435 17:25:50 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:20.435 17:25:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:20.435 17:25:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:20.435 17:25:50 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:20.435 17:25:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:22:20.435 17:25:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.435 17:25:50 -- host/auth.sh@68 -- # digest=sha384 00:22:20.435 17:25:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:20.435 17:25:50 -- host/auth.sh@68 -- # keyid=4 00:22:20.435 17:25:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:20.435 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.435 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.435 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.435 17:25:50 -- nvmf/common.sh@717 -- # local ip 00:22:20.435 17:25:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.435 17:25:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.435 17:25:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.435 17:25:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.435 17:25:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.435 17:25:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.435 17:25:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:20.435 17:25:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.435 
17:25:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.435 17:25:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.435 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.435 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.435 nvme0n1 00:22:20.435 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.435 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.435 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.435 17:25:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.435 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.435 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.435 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.435 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.435 17:25:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:20.435 17:25:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:20.435 17:25:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.435 17:25:50 -- host/auth.sh@44 -- # digest=sha384 00:22:20.435 17:25:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.435 17:25:50 -- host/auth.sh@44 -- # keyid=0 00:22:20.435 17:25:50 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:20.435 17:25:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:20.435 17:25:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:20.435 17:25:50 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:20.435 17:25:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:22:20.435 17:25:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.435 17:25:50 -- host/auth.sh@68 -- # digest=sha384 00:22:20.435 17:25:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:20.435 17:25:50 -- host/auth.sh@68 -- # keyid=0 00:22:20.435 17:25:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:20.435 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.435 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.435 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.435 17:25:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.435 17:25:50 -- nvmf/common.sh@717 -- # local ip 00:22:20.435 17:25:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.435 17:25:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.435 17:25:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.435 17:25:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.435 17:25:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.435 17:25:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.435 17:25:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:20.435 17:25:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.435 17:25:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.435 17:25:50 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:20.435 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.435 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.694 nvme0n1 00:22:20.694 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.694 17:25:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.694 17:25:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.694 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.694 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.694 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.694 17:25:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.694 17:25:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.694 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.694 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.694 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.694 17:25:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:20.694 17:25:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:20.694 17:25:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.694 17:25:50 -- host/auth.sh@44 -- # digest=sha384 00:22:20.694 17:25:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.694 17:25:50 -- host/auth.sh@44 -- # keyid=1 00:22:20.694 17:25:50 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:20.694 17:25:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:20.694 17:25:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:20.694 17:25:50 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:20.694 17:25:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:22:20.694 17:25:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.694 17:25:50 -- host/auth.sh@68 -- # digest=sha384 00:22:20.694 17:25:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:20.694 17:25:50 -- host/auth.sh@68 -- # keyid=1 00:22:20.694 17:25:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:20.694 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.694 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.694 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.694 17:25:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.694 17:25:50 -- nvmf/common.sh@717 -- # local ip 00:22:20.694 17:25:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.694 17:25:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.694 17:25:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.694 17:25:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.694 17:25:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.694 17:25:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.694 17:25:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:20.694 17:25:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.694 17:25:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.694 17:25:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:20.694 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.694 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.953 nvme0n1 00:22:20.953 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.953 17:25:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.953 17:25:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.953 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.953 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.953 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.953 17:25:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.953 17:25:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.953 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.953 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.953 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.953 17:25:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:20.953 17:25:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:20.953 17:25:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.953 17:25:50 -- host/auth.sh@44 -- # digest=sha384 00:22:20.953 17:25:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:20.953 17:25:50 -- host/auth.sh@44 -- # keyid=2 00:22:20.953 17:25:50 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:20.953 17:25:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:20.953 17:25:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:20.953 17:25:50 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:20.953 17:25:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:22:20.953 17:25:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.953 17:25:50 -- host/auth.sh@68 -- # digest=sha384 00:22:20.953 17:25:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:20.953 17:25:50 -- host/auth.sh@68 -- # keyid=2 00:22:20.953 17:25:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:20.953 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.953 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.953 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.953 17:25:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.953 17:25:50 -- nvmf/common.sh@717 -- # local ip 00:22:20.953 17:25:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.953 17:25:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.953 17:25:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.953 17:25:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.953 17:25:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.953 17:25:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.953 17:25:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:20.953 17:25:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.953 17:25:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.953 17:25:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:20.953 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.953 
17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.953 nvme0n1 00:22:20.953 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.953 17:25:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.953 17:25:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.953 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.953 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:20.953 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.212 17:25:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.212 17:25:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.212 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.212 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:21.212 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.212 17:25:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:21.212 17:25:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:21.212 17:25:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:21.212 17:25:50 -- host/auth.sh@44 -- # digest=sha384 00:22:21.212 17:25:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.212 17:25:50 -- host/auth.sh@44 -- # keyid=3 00:22:21.212 17:25:50 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:21.212 17:25:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:21.212 17:25:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:21.212 17:25:50 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:21.212 17:25:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:22:21.212 17:25:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:21.212 17:25:50 -- host/auth.sh@68 -- # digest=sha384 00:22:21.212 17:25:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:21.212 17:25:50 -- host/auth.sh@68 -- # keyid=3 00:22:21.212 17:25:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:21.212 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.212 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:21.212 17:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.212 17:25:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:21.212 17:25:50 -- nvmf/common.sh@717 -- # local ip 00:22:21.212 17:25:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.212 17:25:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.212 17:25:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.212 17:25:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.212 17:25:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:21.212 17:25:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.212 17:25:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:21.212 17:25:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:21.212 17:25:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:21.212 17:25:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:21.212 17:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.212 17:25:50 -- common/autotest_common.sh@10 -- # set +x 00:22:21.212 nvme0n1 00:22:21.212 17:25:51 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.212 17:25:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.212 17:25:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:21.212 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.212 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.212 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.212 17:25:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.212 17:25:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.212 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.212 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.212 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.212 17:25:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:21.212 17:25:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:21.212 17:25:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:21.212 17:25:51 -- host/auth.sh@44 -- # digest=sha384 00:22:21.212 17:25:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:21.212 17:25:51 -- host/auth.sh@44 -- # keyid=4 00:22:21.212 17:25:51 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:21.212 17:25:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:21.212 17:25:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:21.212 17:25:51 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:21.212 17:25:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:22:21.212 17:25:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:21.212 17:25:51 -- host/auth.sh@68 -- # digest=sha384 00:22:21.212 17:25:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:21.212 17:25:51 -- host/auth.sh@68 -- # keyid=4 00:22:21.212 17:25:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:21.212 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.212 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.212 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.212 17:25:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:21.212 17:25:51 -- nvmf/common.sh@717 -- # local ip 00:22:21.212 17:25:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.212 17:25:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.213 17:25:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.213 17:25:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.213 17:25:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:21.213 17:25:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.213 17:25:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:21.213 17:25:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:21.213 17:25:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:21.213 17:25:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:21.213 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.213 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.472 nvme0n1 00:22:21.472 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.472 17:25:51 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.472 17:25:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:21.472 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.472 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.472 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.472 17:25:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.472 17:25:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.472 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.472 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.472 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.472 17:25:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.472 17:25:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:21.472 17:25:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:21.472 17:25:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:21.472 17:25:51 -- host/auth.sh@44 -- # digest=sha384 00:22:21.472 17:25:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:21.472 17:25:51 -- host/auth.sh@44 -- # keyid=0 00:22:21.472 17:25:51 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:21.472 17:25:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:21.472 17:25:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:21.472 17:25:51 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:21.472 17:25:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:22:21.472 17:25:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:21.472 17:25:51 -- host/auth.sh@68 -- # digest=sha384 00:22:21.472 17:25:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:21.472 17:25:51 -- host/auth.sh@68 -- # keyid=0 00:22:21.472 17:25:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:21.472 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.472 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.472 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.472 17:25:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:21.472 17:25:51 -- nvmf/common.sh@717 -- # local ip 00:22:21.472 17:25:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.472 17:25:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.472 17:25:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.472 17:25:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.472 17:25:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:21.472 17:25:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.472 17:25:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:21.472 17:25:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:21.472 17:25:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:21.472 17:25:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:21.472 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.472 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.731 nvme0n1 00:22:21.731 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.731 17:25:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.731 17:25:51 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.731 17:25:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:21.731 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.731 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.731 17:25:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.731 17:25:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.731 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.731 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.731 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.731 17:25:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:21.731 17:25:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:21.731 17:25:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:21.731 17:25:51 -- host/auth.sh@44 -- # digest=sha384 00:22:21.731 17:25:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:21.731 17:25:51 -- host/auth.sh@44 -- # keyid=1 00:22:21.731 17:25:51 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:21.731 17:25:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:21.731 17:25:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:21.731 17:25:51 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:21.731 17:25:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:22:21.731 17:25:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:21.731 17:25:51 -- host/auth.sh@68 -- # digest=sha384 00:22:21.731 17:25:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:21.731 17:25:51 -- host/auth.sh@68 -- # keyid=1 00:22:21.731 17:25:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:21.731 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.731 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.731 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.731 17:25:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:21.731 17:25:51 -- nvmf/common.sh@717 -- # local ip 00:22:21.731 17:25:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.731 17:25:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.731 17:25:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.731 17:25:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.731 17:25:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:21.731 17:25:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.731 17:25:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:21.731 17:25:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:21.731 17:25:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:21.731 17:25:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:21.731 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.731 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.989 nvme0n1 00:22:21.989 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.989 17:25:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.989 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.989 17:25:51 -- common/autotest_common.sh@10 -- # set +x 
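
Every digest/dhgroup/key combination above runs the same host-side cycle: pin the initiator to a single DH-HMAC-CHAP digest and DH group, attach with one of the pre-registered keys, check that the controller actually appeared, then detach. The standalone sketch below condenses that cycle using the exact RPCs visible in the trace; rpc.py as the entry point behind rpc_cmd and its path are assumptions, everything else is copied from the log.

#!/usr/bin/env bash
# One host-side authentication cycle, as traced in host/auth.sh (sketch).
rpc=./scripts/rpc.py             # assumed entry point behind rpc_cmd
digest=sha384 dhgroup=ffdhe4096 keyid=1

# Restrict the initiator to the digest/dhgroup pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach, authenticating with the matching pre-registered key (key0..key4 in the log).
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid"

# The iteration passes only if the controller shows up under the expected name.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]] || exit 1

# Tear down before the next combination.
"$rpc" bdev_nvme_detach_controller nvme0
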
00:22:21.989 17:25:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:21.989 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.989 17:25:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.989 17:25:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.989 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.989 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.989 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.989 17:25:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:21.989 17:25:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:21.989 17:25:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:21.989 17:25:51 -- host/auth.sh@44 -- # digest=sha384 00:22:21.989 17:25:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:21.989 17:25:51 -- host/auth.sh@44 -- # keyid=2 00:22:21.989 17:25:51 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:21.989 17:25:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:21.989 17:25:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:21.989 17:25:51 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:21.989 17:25:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:22:21.989 17:25:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:21.989 17:25:51 -- host/auth.sh@68 -- # digest=sha384 00:22:21.989 17:25:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:21.989 17:25:51 -- host/auth.sh@68 -- # keyid=2 00:22:21.989 17:25:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:21.989 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.989 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:21.989 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.989 17:25:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:21.989 17:25:51 -- nvmf/common.sh@717 -- # local ip 00:22:21.989 17:25:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.989 17:25:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.989 17:25:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.989 17:25:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.989 17:25:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:21.989 17:25:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.989 17:25:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:21.989 17:25:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:21.989 17:25:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:21.989 17:25:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.989 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.989 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:22.247 nvme0n1 00:22:22.247 17:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.247 17:25:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.247 17:25:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:22.248 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.248 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:22:22.248 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.248 17:25:52 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.248 17:25:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.248 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.248 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.248 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.248 17:25:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:22.248 17:25:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:22.248 17:25:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:22.248 17:25:52 -- host/auth.sh@44 -- # digest=sha384 00:22:22.248 17:25:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.248 17:25:52 -- host/auth.sh@44 -- # keyid=3 00:22:22.248 17:25:52 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:22.248 17:25:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:22.248 17:25:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:22.248 17:25:52 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:22.248 17:25:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:22:22.248 17:25:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:22.248 17:25:52 -- host/auth.sh@68 -- # digest=sha384 00:22:22.248 17:25:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:22.248 17:25:52 -- host/auth.sh@68 -- # keyid=3 00:22:22.248 17:25:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:22.248 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.248 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.248 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.248 17:25:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:22.248 17:25:52 -- nvmf/common.sh@717 -- # local ip 00:22:22.248 17:25:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.248 17:25:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.248 17:25:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.248 17:25:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.248 17:25:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:22.248 17:25:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.248 17:25:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:22.248 17:25:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:22.248 17:25:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:22.248 17:25:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:22.248 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.248 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.507 nvme0n1 00:22:22.507 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.507 17:25:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.507 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.507 17:25:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:22.507 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.507 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.507 17:25:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.507 17:25:52 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:22.507 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.507 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.507 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.507 17:25:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:22.507 17:25:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:22.507 17:25:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:22.507 17:25:52 -- host/auth.sh@44 -- # digest=sha384 00:22:22.507 17:25:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:22.507 17:25:52 -- host/auth.sh@44 -- # keyid=4 00:22:22.507 17:25:52 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:22.507 17:25:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:22.507 17:25:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:22.507 17:25:52 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:22.507 17:25:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:22:22.507 17:25:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:22.507 17:25:52 -- host/auth.sh@68 -- # digest=sha384 00:22:22.507 17:25:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:22.507 17:25:52 -- host/auth.sh@68 -- # keyid=4 00:22:22.507 17:25:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:22.507 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.507 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.507 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.507 17:25:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:22.507 17:25:52 -- nvmf/common.sh@717 -- # local ip 00:22:22.507 17:25:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.507 17:25:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.507 17:25:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.507 17:25:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.507 17:25:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:22.507 17:25:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.507 17:25:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:22.507 17:25:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:22.507 17:25:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:22.507 17:25:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:22.507 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.507 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.765 nvme0n1 00:22:22.765 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.765 17:25:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.765 17:25:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:22.765 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.765 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.765 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.765 17:25:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.765 17:25:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.765 17:25:52 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.765 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.765 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.765 17:25:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.765 17:25:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:22.765 17:25:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:22.765 17:25:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:22.765 17:25:52 -- host/auth.sh@44 -- # digest=sha384 00:22:22.765 17:25:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:22.765 17:25:52 -- host/auth.sh@44 -- # keyid=0 00:22:22.765 17:25:52 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:22.765 17:25:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:22.765 17:25:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:22.765 17:25:52 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:22.765 17:25:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:22:22.765 17:25:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:22.765 17:25:52 -- host/auth.sh@68 -- # digest=sha384 00:22:22.765 17:25:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:22.765 17:25:52 -- host/auth.sh@68 -- # keyid=0 00:22:22.765 17:25:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:22.765 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.765 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:22.765 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.765 17:25:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:22.765 17:25:52 -- nvmf/common.sh@717 -- # local ip 00:22:22.765 17:25:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.765 17:25:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.765 17:25:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.765 17:25:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.765 17:25:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:22.765 17:25:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.765 17:25:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:22.765 17:25:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:22.765 17:25:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:22.765 17:25:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:22.765 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.765 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:23.023 nvme0n1 00:22:23.023 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.024 17:25:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.024 17:25:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:23.024 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.024 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:23.024 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.024 17:25:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.024 17:25:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.024 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.024 17:25:52 -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.024 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.024 17:25:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:23.024 17:25:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:23.024 17:25:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:23.024 17:25:52 -- host/auth.sh@44 -- # digest=sha384 00:22:23.024 17:25:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:23.024 17:25:52 -- host/auth.sh@44 -- # keyid=1 00:22:23.024 17:25:52 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:23.024 17:25:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:23.024 17:25:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:23.024 17:25:52 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:23.024 17:25:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:22:23.024 17:25:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:23.024 17:25:52 -- host/auth.sh@68 -- # digest=sha384 00:22:23.024 17:25:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:23.024 17:25:52 -- host/auth.sh@68 -- # keyid=1 00:22:23.024 17:25:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:23.024 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.024 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:23.024 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.024 17:25:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:23.024 17:25:52 -- nvmf/common.sh@717 -- # local ip 00:22:23.024 17:25:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.024 17:25:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.024 17:25:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.024 17:25:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.024 17:25:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:23.024 17:25:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.024 17:25:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:23.024 17:25:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:23.024 17:25:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:23.024 17:25:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:23.024 17:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.024 17:25:52 -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 nvme0n1 00:22:23.285 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.285 17:25:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.285 17:25:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:23.285 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.285 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.285 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.549 17:25:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.549 17:25:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.549 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.549 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.549 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
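
The for-loops at host/auth.sh@107-110 that keep reappearing drive this whole matrix: for every digest, DH group and key, the target side is re-keyed through nvmet_auth_set_key before the host reconnects. The sketch below shows that target-side step plus the loop structure; it assumes the three values echoed in the trace (the hmac() spec, the DH group, and the DHHC-1 secret) are written into the kernel nvmet target's per-host configfs attributes, as kernels with NVMe in-band authentication support expose them; the configfs path and attribute names are an assumption, not something the log states.

# Target-side re-keying plus the driving loops (sketch; see assumptions above).
digests=(sha256 sha384)                                       # digests exercised in this part of the log
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(
    "DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO:"   # key0 from the log
    # ...key1..key4 as registered earlier in the run...
)
host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs location

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    echo "hmac($digest)"   > "$host_cfs/dhchap_hash"
    echo "$dhgroup"        > "$host_cfs/dhchap_dhgroup"
    echo "${keys[$keyid]}" > "$host_cfs/dhchap_key"
}

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # ...host side: set_options, attach with --dhchap-key key$keyid,
            #    verify nvme0, detach (the cycle sketched earlier)...
        done
    done
done
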
00:22:23.549 17:25:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:23.549 17:25:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:23.549 17:25:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:23.549 17:25:53 -- host/auth.sh@44 -- # digest=sha384 00:22:23.549 17:25:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:23.549 17:25:53 -- host/auth.sh@44 -- # keyid=2 00:22:23.549 17:25:53 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:23.549 17:25:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:23.549 17:25:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:23.549 17:25:53 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:23.549 17:25:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:22:23.549 17:25:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:23.549 17:25:53 -- host/auth.sh@68 -- # digest=sha384 00:22:23.549 17:25:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:23.549 17:25:53 -- host/auth.sh@68 -- # keyid=2 00:22:23.549 17:25:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:23.549 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.549 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.549 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.549 17:25:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:23.549 17:25:53 -- nvmf/common.sh@717 -- # local ip 00:22:23.549 17:25:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.549 17:25:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.549 17:25:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.549 17:25:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.549 17:25:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:23.549 17:25:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.549 17:25:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:23.549 17:25:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:23.549 17:25:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:23.549 17:25:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:23.549 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.549 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.807 nvme0n1 00:22:23.807 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.807 17:25:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.807 17:25:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:23.807 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.807 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.807 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.807 17:25:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.807 17:25:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.807 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.807 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.807 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.807 17:25:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:23.807 17:25:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
00:22:23.807 17:25:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:23.807 17:25:53 -- host/auth.sh@44 -- # digest=sha384 00:22:23.807 17:25:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:23.807 17:25:53 -- host/auth.sh@44 -- # keyid=3 00:22:23.807 17:25:53 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:23.807 17:25:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:23.807 17:25:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:23.807 17:25:53 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:23.807 17:25:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:22:23.807 17:25:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:23.807 17:25:53 -- host/auth.sh@68 -- # digest=sha384 00:22:23.807 17:25:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:23.807 17:25:53 -- host/auth.sh@68 -- # keyid=3 00:22:23.807 17:25:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:23.807 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.807 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:23.807 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.807 17:25:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:23.807 17:25:53 -- nvmf/common.sh@717 -- # local ip 00:22:23.807 17:25:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.807 17:25:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.807 17:25:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.807 17:25:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.807 17:25:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:23.807 17:25:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.807 17:25:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:23.807 17:25:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:23.807 17:25:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:23.807 17:25:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:23.807 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.807 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:24.065 nvme0n1 00:22:24.065 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.065 17:25:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.065 17:25:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:24.065 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.065 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:22:24.065 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.065 17:25:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.065 17:25:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.065 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.065 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.065 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.065 17:25:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.065 17:25:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:24.065 17:25:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.065 17:25:54 -- host/auth.sh@44 -- 
# digest=sha384 00:22:24.065 17:25:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:24.065 17:25:54 -- host/auth.sh@44 -- # keyid=4 00:22:24.065 17:25:54 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:24.065 17:25:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:24.065 17:25:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:24.065 17:25:54 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:24.065 17:25:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:22:24.065 17:25:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:24.065 17:25:54 -- host/auth.sh@68 -- # digest=sha384 00:22:24.065 17:25:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:24.065 17:25:54 -- host/auth.sh@68 -- # keyid=4 00:22:24.065 17:25:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:24.065 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.065 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.065 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.065 17:25:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.065 17:25:54 -- nvmf/common.sh@717 -- # local ip 00:22:24.065 17:25:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.065 17:25:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.065 17:25:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.065 17:25:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.065 17:25:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.065 17:25:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.065 17:25:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.065 17:25:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.065 17:25:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.065 17:25:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:24.065 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.065 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.630 nvme0n1 00:22:24.630 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.630 17:25:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.630 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.630 17:25:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:24.630 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.630 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.630 17:25:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.630 17:25:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.630 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.630 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.630 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.630 17:25:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.630 17:25:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.630 17:25:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:24.630 17:25:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.630 17:25:54 -- host/auth.sh@44 -- # 
digest=sha384 00:22:24.630 17:25:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:24.630 17:25:54 -- host/auth.sh@44 -- # keyid=0 00:22:24.630 17:25:54 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:24.630 17:25:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:24.630 17:25:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:24.630 17:25:54 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:24.630 17:25:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:22:24.630 17:25:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:24.630 17:25:54 -- host/auth.sh@68 -- # digest=sha384 00:22:24.630 17:25:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:24.630 17:25:54 -- host/auth.sh@68 -- # keyid=0 00:22:24.630 17:25:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:24.630 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.630 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.630 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.630 17:25:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.630 17:25:54 -- nvmf/common.sh@717 -- # local ip 00:22:24.630 17:25:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.630 17:25:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.630 17:25:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.630 17:25:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.630 17:25:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.630 17:25:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.630 17:25:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.630 17:25:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.630 17:25:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.630 17:25:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:24.630 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.630 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:25.195 nvme0n1 00:22:25.195 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.195 17:25:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.195 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.195 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:25.195 17:25:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.195 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.195 17:25:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.195 17:25:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.195 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.195 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:25.195 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.195 17:25:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:25.195 17:25:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:25.195 17:25:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:25.195 17:25:54 -- host/auth.sh@44 -- # digest=sha384 00:22:25.195 17:25:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:25.195 17:25:54 -- host/auth.sh@44 -- # keyid=1 00:22:25.195 17:25:54 -- 
host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:25.195 17:25:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:25.195 17:25:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:25.195 17:25:54 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:25.195 17:25:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:22:25.195 17:25:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:25.195 17:25:54 -- host/auth.sh@68 -- # digest=sha384 00:22:25.195 17:25:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:25.195 17:25:54 -- host/auth.sh@68 -- # keyid=1 00:22:25.195 17:25:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:25.195 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.195 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:25.195 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.195 17:25:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:25.195 17:25:54 -- nvmf/common.sh@717 -- # local ip 00:22:25.195 17:25:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.195 17:25:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.195 17:25:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.195 17:25:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.195 17:25:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:25.195 17:25:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.195 17:25:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:25.195 17:25:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:25.195 17:25:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:25.195 17:25:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:25.195 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.195 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:22:25.769 nvme0n1 00:22:25.769 17:25:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.769 17:25:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.769 17:25:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.769 17:25:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.769 17:25:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.769 17:25:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.769 17:25:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.769 17:25:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.769 17:25:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.769 17:25:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.770 17:25:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.770 17:25:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:25.770 17:25:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:25.770 17:25:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:25.770 17:25:55 -- host/auth.sh@44 -- # digest=sha384 00:22:25.770 17:25:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:25.770 17:25:55 -- host/auth.sh@44 -- # keyid=2 00:22:25.770 17:25:55 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:25.770 17:25:55 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:25.770 17:25:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:25.770 17:25:55 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:25.770 17:25:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:22:25.770 17:25:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:25.770 17:25:55 -- host/auth.sh@68 -- # digest=sha384 00:22:25.770 17:25:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:25.770 17:25:55 -- host/auth.sh@68 -- # keyid=2 00:22:25.770 17:25:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:25.770 17:25:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.770 17:25:55 -- common/autotest_common.sh@10 -- # set +x 00:22:25.770 17:25:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.770 17:25:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:25.770 17:25:55 -- nvmf/common.sh@717 -- # local ip 00:22:25.770 17:25:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.770 17:25:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.770 17:25:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.770 17:25:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.770 17:25:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:25.770 17:25:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.770 17:25:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:25.770 17:25:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:25.770 17:25:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:25.770 17:25:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:25.770 17:25:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.770 17:25:55 -- common/autotest_common.sh@10 -- # set +x 00:22:26.361 nvme0n1 00:22:26.361 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.361 17:25:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.361 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.361 17:25:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.361 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.361 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.362 17:25:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.362 17:25:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.362 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.362 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.362 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.362 17:25:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.362 17:25:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:26.362 17:25:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.362 17:25:56 -- host/auth.sh@44 -- # digest=sha384 00:22:26.362 17:25:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:26.362 17:25:56 -- host/auth.sh@44 -- # keyid=3 00:22:26.362 17:25:56 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:26.362 17:25:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.362 17:25:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:26.362 17:25:56 -- host/auth.sh@49 
-- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:26.362 17:25:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:22:26.362 17:25:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.362 17:25:56 -- host/auth.sh@68 -- # digest=sha384 00:22:26.362 17:25:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:26.362 17:25:56 -- host/auth.sh@68 -- # keyid=3 00:22:26.362 17:25:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:26.362 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.362 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.362 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.362 17:25:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.362 17:25:56 -- nvmf/common.sh@717 -- # local ip 00:22:26.362 17:25:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.362 17:25:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.362 17:25:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.362 17:25:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.362 17:25:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.362 17:25:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.362 17:25:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.362 17:25:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.362 17:25:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.362 17:25:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:26.362 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.362 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.928 nvme0n1 00:22:26.928 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.928 17:25:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.928 17:25:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.928 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.928 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.928 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.928 17:25:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.928 17:25:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.928 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.928 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.928 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.928 17:25:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.928 17:25:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:26.928 17:25:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.928 17:25:56 -- host/auth.sh@44 -- # digest=sha384 00:22:26.928 17:25:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:26.928 17:25:56 -- host/auth.sh@44 -- # keyid=4 00:22:26.928 17:25:56 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:26.928 17:25:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.928 17:25:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:26.928 17:25:56 -- host/auth.sh@49 -- # echo 
DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:26.928 17:25:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:22:26.928 17:25:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.928 17:25:56 -- host/auth.sh@68 -- # digest=sha384 00:22:26.928 17:25:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:26.928 17:25:56 -- host/auth.sh@68 -- # keyid=4 00:22:26.928 17:25:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:26.928 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.928 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.928 17:25:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.928 17:25:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.928 17:25:56 -- nvmf/common.sh@717 -- # local ip 00:22:26.928 17:25:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.928 17:25:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.928 17:25:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.928 17:25:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.928 17:25:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.928 17:25:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.928 17:25:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.928 17:25:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.928 17:25:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.928 17:25:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:26.928 17:25:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.928 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:22:27.494 nvme0n1 00:22:27.494 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.495 17:25:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:27.495 17:25:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.495 17:25:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.495 17:25:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:27.495 17:25:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.495 17:25:57 -- host/auth.sh@44 -- # digest=sha512 00:22:27.495 17:25:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:27.495 17:25:57 -- host/auth.sh@44 -- # keyid=0 00:22:27.495 17:25:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:27.495 17:25:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:27.495 17:25:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:27.495 
17:25:57 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:27.495 17:25:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:22:27.495 17:25:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.495 17:25:57 -- host/auth.sh@68 -- # digest=sha512 00:22:27.495 17:25:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:27.495 17:25:57 -- host/auth.sh@68 -- # keyid=0 00:22:27.495 17:25:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.495 17:25:57 -- nvmf/common.sh@717 -- # local ip 00:22:27.495 17:25:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.495 17:25:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.495 17:25:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.495 17:25:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.495 17:25:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.495 17:25:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.495 17:25:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.495 17:25:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.495 17:25:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.495 17:25:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 nvme0n1 00:22:27.495 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.495 17:25:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.495 17:25:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:27.495 17:25:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.495 17:25:57 -- host/auth.sh@44 -- # digest=sha512 00:22:27.495 17:25:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:27.495 17:25:57 -- host/auth.sh@44 -- # keyid=1 00:22:27.495 17:25:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:27.495 17:25:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:27.495 17:25:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:27.495 17:25:57 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:27.495 17:25:57 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:22:27.495 17:25:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.495 17:25:57 -- host/auth.sh@68 -- # digest=sha512 00:22:27.495 17:25:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:27.495 17:25:57 -- host/auth.sh@68 -- # keyid=1 00:22:27.495 17:25:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.495 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.495 17:25:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.495 17:25:57 -- nvmf/common.sh@717 -- # local ip 00:22:27.495 17:25:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.495 17:25:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.495 17:25:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.495 17:25:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.495 17:25:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.495 17:25:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.495 17:25:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.495 17:25:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.495 17:25:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.495 17:25:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:27.495 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.495 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.753 nvme0n1 00:22:27.753 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.753 17:25:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.753 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.753 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.753 17:25:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.753 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.754 17:25:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.754 17:25:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.754 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.754 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.754 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.754 17:25:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.754 17:25:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:27.754 17:25:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.754 17:25:57 -- host/auth.sh@44 -- # digest=sha512 00:22:27.754 17:25:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:27.754 17:25:57 -- host/auth.sh@44 -- # keyid=2 00:22:27.754 17:25:57 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:27.754 17:25:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:27.754 17:25:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:27.754 17:25:57 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:27.754 17:25:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:22:27.754 17:25:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.754 17:25:57 -- 
host/auth.sh@68 -- # digest=sha512 00:22:27.754 17:25:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:27.754 17:25:57 -- host/auth.sh@68 -- # keyid=2 00:22:27.754 17:25:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:27.754 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.754 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.754 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.754 17:25:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.754 17:25:57 -- nvmf/common.sh@717 -- # local ip 00:22:27.754 17:25:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.754 17:25:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.754 17:25:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.754 17:25:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.754 17:25:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.754 17:25:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.754 17:25:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.754 17:25:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.754 17:25:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.754 17:25:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:27.754 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.754 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:27.754 nvme0n1 00:22:27.754 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.754 17:25:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.754 17:25:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.754 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.754 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.012 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.012 17:25:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.012 17:25:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.012 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.012 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.012 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.012 17:25:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.012 17:25:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:28.012 17:25:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.012 17:25:57 -- host/auth.sh@44 -- # digest=sha512 00:22:28.012 17:25:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:28.012 17:25:57 -- host/auth.sh@44 -- # keyid=3 00:22:28.012 17:25:57 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:28.012 17:25:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:28.012 17:25:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:28.012 17:25:57 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:28.012 17:25:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:22:28.013 17:25:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.013 17:25:57 -- host/auth.sh@68 -- # digest=sha512 00:22:28.013 17:25:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:28.013 17:25:57 
-- host/auth.sh@68 -- # keyid=3 00:22:28.013 17:25:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:28.013 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.013 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.013 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.013 17:25:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.013 17:25:57 -- nvmf/common.sh@717 -- # local ip 00:22:28.013 17:25:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.013 17:25:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.013 17:25:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.013 17:25:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.013 17:25:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.013 17:25:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.013 17:25:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.013 17:25:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.013 17:25:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.013 17:25:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:28.013 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.013 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.013 nvme0n1 00:22:28.013 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.013 17:25:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.013 17:25:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.013 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.013 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.013 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.013 17:25:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.013 17:25:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.013 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.013 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.013 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.013 17:25:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.013 17:25:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:28.013 17:25:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.013 17:25:57 -- host/auth.sh@44 -- # digest=sha512 00:22:28.013 17:25:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:28.013 17:25:57 -- host/auth.sh@44 -- # keyid=4 00:22:28.013 17:25:57 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:28.013 17:25:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:28.013 17:25:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:28.013 17:25:57 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:28.013 17:25:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:22:28.013 17:25:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.013 17:25:57 -- host/auth.sh@68 -- # digest=sha512 00:22:28.013 17:25:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:28.013 17:25:57 -- host/auth.sh@68 -- # keyid=4 00:22:28.013 17:25:57 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:28.013 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.013 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.013 17:25:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.013 17:25:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.013 17:25:57 -- nvmf/common.sh@717 -- # local ip 00:22:28.013 17:25:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.013 17:25:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.013 17:25:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.013 17:25:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.013 17:25:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.013 17:25:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.013 17:25:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.013 17:25:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.013 17:25:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.013 17:25:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:28.013 17:25:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.013 17:25:57 -- common/autotest_common.sh@10 -- # set +x 00:22:28.271 nvme0n1 00:22:28.271 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.271 17:25:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.271 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.271 17:25:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.271 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.271 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.271 17:25:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.271 17:25:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.271 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.271 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.271 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.271 17:25:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.271 17:25:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.271 17:25:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:28.271 17:25:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.271 17:25:58 -- host/auth.sh@44 -- # digest=sha512 00:22:28.271 17:25:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:28.271 17:25:58 -- host/auth.sh@44 -- # keyid=0 00:22:28.271 17:25:58 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:28.271 17:25:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:28.271 17:25:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:28.271 17:25:58 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:28.271 17:25:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:22:28.271 17:25:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.271 17:25:58 -- host/auth.sh@68 -- # digest=sha512 00:22:28.271 17:25:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:28.271 17:25:58 -- host/auth.sh@68 -- # keyid=0 00:22:28.271 17:25:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
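Right after each bdev_nvme_set_options call like the one above, the trace enters get_main_ns_ip (nvmf/common.sh@717-731), which maps the transport to the environment variable holding the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolving to 10.0.0.1 in this run. A sketch of that selection logic; the transport variable name is assumed (only its value, tcp, is visible in the trace):

get_main_ns_ip() {
    local ip
    local -A ip_candidates

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1        # assumed name; expands to "tcp" here
    ip=${ip_candidates[$TEST_TRANSPORT]}        # -> NVMF_INITIATOR_IP
    [[ -z $ip ]] && return 1
    [[ -z ${!ip} ]] && return 1                 # indirect expansion -> 10.0.0.1
    echo "${!ip}"                               # the address passed to attach_controller
}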
00:22:28.271 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.271 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.271 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.271 17:25:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.271 17:25:58 -- nvmf/common.sh@717 -- # local ip 00:22:28.271 17:25:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.271 17:25:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.271 17:25:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.271 17:25:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.271 17:25:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.271 17:25:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.271 17:25:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.271 17:25:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.271 17:25:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.271 17:25:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:28.272 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.272 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 nvme0n1 00:22:28.530 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.530 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.530 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 17:25:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.530 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.530 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.530 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.530 17:25:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:28.530 17:25:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.530 17:25:58 -- host/auth.sh@44 -- # digest=sha512 00:22:28.530 17:25:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:28.530 17:25:58 -- host/auth.sh@44 -- # keyid=1 00:22:28.530 17:25:58 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:28.530 17:25:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:28.530 17:25:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:28.530 17:25:58 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:28.530 17:25:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:22:28.530 17:25:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.530 17:25:58 -- host/auth.sh@68 -- # digest=sha512 00:22:28.530 17:25:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:28.530 17:25:58 -- host/auth.sh@68 -- # keyid=1 00:22:28.530 17:25:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:28.530 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.530 17:25:58 -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.530 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.530 17:25:58 -- nvmf/common.sh@717 -- # local ip 00:22:28.530 17:25:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.530 17:25:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.530 17:25:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.530 17:25:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.530 17:25:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.530 17:25:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.530 17:25:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.530 17:25:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.530 17:25:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.530 17:25:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:28.530 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.530 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 nvme0n1 00:22:28.530 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.530 17:25:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.530 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.530 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.530 17:25:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.530 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.530 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.788 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.788 17:25:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.788 17:25:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:28.788 17:25:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.788 17:25:58 -- host/auth.sh@44 -- # digest=sha512 00:22:28.788 17:25:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:28.788 17:25:58 -- host/auth.sh@44 -- # keyid=2 00:22:28.788 17:25:58 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:28.788 17:25:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:28.788 17:25:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:28.788 17:25:58 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:28.788 17:25:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:22:28.788 17:25:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.788 17:25:58 -- host/auth.sh@68 -- # digest=sha512 00:22:28.788 17:25:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:28.788 17:25:58 -- host/auth.sh@68 -- # keyid=2 00:22:28.788 17:25:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:28.788 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.788 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.788 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.788 17:25:58 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:22:28.788 17:25:58 -- nvmf/common.sh@717 -- # local ip 00:22:28.788 17:25:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.788 17:25:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.788 17:25:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.788 17:25:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.788 17:25:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.788 17:25:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.788 17:25:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.788 17:25:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.788 17:25:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.788 17:25:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:28.788 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.788 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.788 nvme0n1 00:22:28.788 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.788 17:25:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.788 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.788 17:25:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.788 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.788 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.788 17:25:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.788 17:25:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.788 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.788 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.788 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.788 17:25:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.788 17:25:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:28.788 17:25:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.788 17:25:58 -- host/auth.sh@44 -- # digest=sha512 00:22:28.788 17:25:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:28.788 17:25:58 -- host/auth.sh@44 -- # keyid=3 00:22:28.788 17:25:58 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:28.788 17:25:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:28.788 17:25:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:28.788 17:25:58 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:28.788 17:25:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:22:28.788 17:25:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.788 17:25:58 -- host/auth.sh@68 -- # digest=sha512 00:22:28.788 17:25:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:28.788 17:25:58 -- host/auth.sh@68 -- # keyid=3 00:22:28.788 17:25:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:28.788 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.788 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.788 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.788 17:25:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.789 17:25:58 -- nvmf/common.sh@717 -- # local ip 00:22:28.789 17:25:58 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:22:28.789 17:25:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.789 17:25:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.789 17:25:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.789 17:25:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.789 17:25:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.789 17:25:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.789 17:25:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.789 17:25:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.789 17:25:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:28.789 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.789 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.046 nvme0n1 00:22:29.046 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.046 17:25:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.046 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.046 17:25:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.047 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.047 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.047 17:25:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.047 17:25:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.047 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.047 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.047 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.047 17:25:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.047 17:25:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:29.047 17:25:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.047 17:25:58 -- host/auth.sh@44 -- # digest=sha512 00:22:29.047 17:25:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:29.047 17:25:58 -- host/auth.sh@44 -- # keyid=4 00:22:29.047 17:25:58 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:29.047 17:25:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:29.047 17:25:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:29.047 17:25:58 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:29.047 17:25:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:22:29.047 17:25:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.047 17:25:58 -- host/auth.sh@68 -- # digest=sha512 00:22:29.047 17:25:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:29.047 17:25:58 -- host/auth.sh@68 -- # keyid=4 00:22:29.047 17:25:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:29.047 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.047 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.047 17:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.047 17:25:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.047 17:25:58 -- nvmf/common.sh@717 -- # local ip 00:22:29.047 17:25:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.047 17:25:58 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:22:29.047 17:25:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.047 17:25:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.047 17:25:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.047 17:25:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.047 17:25:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.047 17:25:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.047 17:25:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.047 17:25:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.047 17:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.047 17:25:58 -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 nvme0n1 00:22:29.305 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.305 17:25:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.305 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.305 17:25:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.305 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.305 17:25:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.305 17:25:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.305 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.305 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.305 17:25:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.305 17:25:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.305 17:25:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:29.305 17:25:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.305 17:25:59 -- host/auth.sh@44 -- # digest=sha512 00:22:29.305 17:25:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:29.305 17:25:59 -- host/auth.sh@44 -- # keyid=0 00:22:29.305 17:25:59 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:29.305 17:25:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:29.305 17:25:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:29.305 17:25:59 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:29.305 17:25:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:22:29.305 17:25:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.305 17:25:59 -- host/auth.sh@68 -- # digest=sha512 00:22:29.305 17:25:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:29.305 17:25:59 -- host/auth.sh@68 -- # keyid=0 00:22:29.305 17:25:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:29.305 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.305 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.305 17:25:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.305 17:25:59 -- nvmf/common.sh@717 -- # local ip 00:22:29.305 17:25:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.305 17:25:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.305 17:25:59 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.305 17:25:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.305 17:25:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.305 17:25:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.305 17:25:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.305 17:25:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.305 17:25:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.305 17:25:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:29.305 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.305 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.305 nvme0n1 00:22:29.305 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.305 17:25:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.305 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.305 17:25:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.305 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.564 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.564 17:25:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.564 17:25:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.564 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.564 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.564 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.564 17:25:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.564 17:25:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:29.564 17:25:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.564 17:25:59 -- host/auth.sh@44 -- # digest=sha512 00:22:29.564 17:25:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:29.564 17:25:59 -- host/auth.sh@44 -- # keyid=1 00:22:29.564 17:25:59 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:29.564 17:25:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:29.564 17:25:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:29.564 17:25:59 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:29.564 17:25:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:22:29.564 17:25:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.564 17:25:59 -- host/auth.sh@68 -- # digest=sha512 00:22:29.564 17:25:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:29.564 17:25:59 -- host/auth.sh@68 -- # keyid=1 00:22:29.564 17:25:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:29.564 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.564 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.564 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.564 17:25:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.564 17:25:59 -- nvmf/common.sh@717 -- # local ip 00:22:29.564 17:25:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.564 17:25:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.564 17:25:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.564 17:25:59 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.564 17:25:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.564 17:25:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.564 17:25:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.564 17:25:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.564 17:25:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.564 17:25:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:29.564 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.564 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.564 nvme0n1 00:22:29.564 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.564 17:25:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.564 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.564 17:25:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.564 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.564 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.823 17:25:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.823 17:25:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.823 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.823 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.823 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.823 17:25:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.823 17:25:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:29.823 17:25:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.823 17:25:59 -- host/auth.sh@44 -- # digest=sha512 00:22:29.823 17:25:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:29.823 17:25:59 -- host/auth.sh@44 -- # keyid=2 00:22:29.823 17:25:59 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:29.823 17:25:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:29.823 17:25:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:29.823 17:25:59 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:29.823 17:25:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:22:29.823 17:25:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.823 17:25:59 -- host/auth.sh@68 -- # digest=sha512 00:22:29.823 17:25:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:29.823 17:25:59 -- host/auth.sh@68 -- # keyid=2 00:22:29.823 17:25:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:29.823 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.823 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.823 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.823 17:25:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.823 17:25:59 -- nvmf/common.sh@717 -- # local ip 00:22:29.823 17:25:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.823 17:25:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.823 17:25:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.823 17:25:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.823 17:25:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.823 17:25:59 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:22:29.823 17:25:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.823 17:25:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.823 17:25:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.823 17:25:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:29.823 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.823 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.823 nvme0n1 00:22:29.823 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.823 17:25:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.823 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.823 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:29.823 17:25:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.823 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.081 17:25:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.081 17:25:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.081 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.081 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.081 17:25:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:30.081 17:25:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:30.081 17:25:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:30.081 17:25:59 -- host/auth.sh@44 -- # digest=sha512 00:22:30.081 17:25:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:30.081 17:25:59 -- host/auth.sh@44 -- # keyid=3 00:22:30.081 17:25:59 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:30.081 17:25:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:30.081 17:25:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:30.081 17:25:59 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:30.081 17:25:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:22:30.081 17:25:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:30.081 17:25:59 -- host/auth.sh@68 -- # digest=sha512 00:22:30.081 17:25:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:30.081 17:25:59 -- host/auth.sh@68 -- # keyid=3 00:22:30.081 17:25:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:30.081 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.081 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 17:25:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.081 17:25:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.081 17:25:59 -- nvmf/common.sh@717 -- # local ip 00:22:30.081 17:25:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.081 17:25:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.081 17:25:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.081 17:25:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.081 17:25:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.081 17:25:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.081 17:25:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.081 17:25:59 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.081 17:25:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.081 17:25:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:30.081 17:25:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.081 17:25:59 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 nvme0n1 00:22:30.081 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.081 17:26:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.081 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.081 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.081 17:26:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:30.081 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.339 17:26:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.339 17:26:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.339 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.339 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.339 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.339 17:26:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:30.339 17:26:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:30.339 17:26:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:30.339 17:26:00 -- host/auth.sh@44 -- # digest=sha512 00:22:30.339 17:26:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:30.339 17:26:00 -- host/auth.sh@44 -- # keyid=4 00:22:30.339 17:26:00 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:30.339 17:26:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:30.339 17:26:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:30.339 17:26:00 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:30.339 17:26:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:22:30.339 17:26:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:30.339 17:26:00 -- host/auth.sh@68 -- # digest=sha512 00:22:30.339 17:26:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:30.339 17:26:00 -- host/auth.sh@68 -- # keyid=4 00:22:30.339 17:26:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:30.339 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.339 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.339 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.339 17:26:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.339 17:26:00 -- nvmf/common.sh@717 -- # local ip 00:22:30.339 17:26:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.339 17:26:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.340 17:26:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.340 17:26:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.340 17:26:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.340 17:26:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.340 17:26:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.340 17:26:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.340 17:26:00 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.340 17:26:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:30.340 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.340 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.340 nvme0n1 00:22:30.340 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.340 17:26:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:30.340 17:26:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.340 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.340 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.340 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.598 17:26:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.599 17:26:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.599 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.599 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.599 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.599 17:26:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.599 17:26:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:30.599 17:26:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:30.599 17:26:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:30.599 17:26:00 -- host/auth.sh@44 -- # digest=sha512 00:22:30.599 17:26:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:30.599 17:26:00 -- host/auth.sh@44 -- # keyid=0 00:22:30.599 17:26:00 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:30.599 17:26:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:30.599 17:26:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:30.599 17:26:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:30.599 17:26:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:22:30.599 17:26:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:30.599 17:26:00 -- host/auth.sh@68 -- # digest=sha512 00:22:30.599 17:26:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:30.599 17:26:00 -- host/auth.sh@68 -- # keyid=0 00:22:30.599 17:26:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.599 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.599 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.599 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.599 17:26:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.599 17:26:00 -- nvmf/common.sh@717 -- # local ip 00:22:30.599 17:26:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.599 17:26:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.599 17:26:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.599 17:26:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.599 17:26:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.599 17:26:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.599 17:26:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.599 17:26:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.599 17:26:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.599 17:26:00 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:30.599 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.599 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.857 nvme0n1 00:22:30.857 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.857 17:26:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.857 17:26:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:30.857 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.857 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.857 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.857 17:26:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.857 17:26:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.857 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.857 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.857 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.857 17:26:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:30.857 17:26:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:30.857 17:26:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:30.857 17:26:00 -- host/auth.sh@44 -- # digest=sha512 00:22:30.857 17:26:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:30.858 17:26:00 -- host/auth.sh@44 -- # keyid=1 00:22:30.858 17:26:00 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:30.858 17:26:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:30.858 17:26:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:30.858 17:26:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:30.858 17:26:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:22:30.858 17:26:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:30.858 17:26:00 -- host/auth.sh@68 -- # digest=sha512 00:22:30.858 17:26:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:30.858 17:26:00 -- host/auth.sh@68 -- # keyid=1 00:22:30.858 17:26:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.858 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.858 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.858 17:26:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.858 17:26:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.858 17:26:00 -- nvmf/common.sh@717 -- # local ip 00:22:30.858 17:26:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.858 17:26:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.858 17:26:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.858 17:26:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.858 17:26:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.858 17:26:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.858 17:26:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.858 17:26:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.858 17:26:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.858 17:26:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:30.858 17:26:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.858 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:22:31.116 nvme0n1 00:22:31.116 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.116 17:26:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.116 17:26:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:31.116 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.116 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.116 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.374 17:26:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.374 17:26:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.374 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.374 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.374 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.374 17:26:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:31.374 17:26:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:31.374 17:26:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:31.374 17:26:01 -- host/auth.sh@44 -- # digest=sha512 00:22:31.374 17:26:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:31.374 17:26:01 -- host/auth.sh@44 -- # keyid=2 00:22:31.374 17:26:01 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:31.374 17:26:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:31.374 17:26:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:31.374 17:26:01 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:31.374 17:26:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:22:31.374 17:26:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:31.374 17:26:01 -- host/auth.sh@68 -- # digest=sha512 00:22:31.374 17:26:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:31.374 17:26:01 -- host/auth.sh@68 -- # keyid=2 00:22:31.374 17:26:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:31.374 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.374 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.374 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.374 17:26:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:31.374 17:26:01 -- nvmf/common.sh@717 -- # local ip 00:22:31.374 17:26:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.374 17:26:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.374 17:26:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.374 17:26:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.374 17:26:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:31.374 17:26:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.374 17:26:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:31.374 17:26:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:31.374 17:26:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:31.374 17:26:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:31.374 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.374 17:26:01 -- 
common/autotest_common.sh@10 -- # set +x 00:22:31.633 nvme0n1 00:22:31.633 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.633 17:26:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.633 17:26:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:31.633 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.633 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.633 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.633 17:26:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.633 17:26:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.633 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.633 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.633 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.633 17:26:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:31.633 17:26:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:31.633 17:26:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:31.633 17:26:01 -- host/auth.sh@44 -- # digest=sha512 00:22:31.633 17:26:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:31.633 17:26:01 -- host/auth.sh@44 -- # keyid=3 00:22:31.633 17:26:01 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:31.633 17:26:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:31.633 17:26:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:31.633 17:26:01 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:31.633 17:26:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:22:31.633 17:26:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:31.633 17:26:01 -- host/auth.sh@68 -- # digest=sha512 00:22:31.633 17:26:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:31.633 17:26:01 -- host/auth.sh@68 -- # keyid=3 00:22:31.633 17:26:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:31.633 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.633 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.633 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.633 17:26:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:31.633 17:26:01 -- nvmf/common.sh@717 -- # local ip 00:22:31.633 17:26:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.633 17:26:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.633 17:26:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.633 17:26:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.633 17:26:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:31.633 17:26:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.633 17:26:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:31.633 17:26:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:31.633 17:26:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:31.633 17:26:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:31.633 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.633 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.892 nvme0n1 00:22:31.892 17:26:01 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:22:31.892 17:26:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.892 17:26:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:31.892 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.892 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:31.892 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.892 17:26:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.892 17:26:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.892 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.892 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:32.151 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.151 17:26:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:32.151 17:26:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:32.151 17:26:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:32.151 17:26:01 -- host/auth.sh@44 -- # digest=sha512 00:22:32.151 17:26:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:32.151 17:26:01 -- host/auth.sh@44 -- # keyid=4 00:22:32.151 17:26:01 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:32.151 17:26:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:32.151 17:26:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:32.151 17:26:01 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:32.151 17:26:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:22:32.151 17:26:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:32.151 17:26:01 -- host/auth.sh@68 -- # digest=sha512 00:22:32.151 17:26:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:32.151 17:26:01 -- host/auth.sh@68 -- # keyid=4 00:22:32.151 17:26:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.151 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.151 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:32.151 17:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.151 17:26:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:32.151 17:26:01 -- nvmf/common.sh@717 -- # local ip 00:22:32.151 17:26:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.151 17:26:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.151 17:26:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.151 17:26:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.151 17:26:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:32.151 17:26:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.151 17:26:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:32.151 17:26:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:32.151 17:26:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:32.151 17:26:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.151 17:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.151 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:22:32.410 nvme0n1 00:22:32.410 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.410 17:26:02 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:22:32.410 17:26:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:32.410 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.410 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.410 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.410 17:26:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.410 17:26:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.410 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.410 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.410 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.410 17:26:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.410 17:26:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:32.410 17:26:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:32.410 17:26:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:32.410 17:26:02 -- host/auth.sh@44 -- # digest=sha512 00:22:32.410 17:26:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:32.410 17:26:02 -- host/auth.sh@44 -- # keyid=0 00:22:32.410 17:26:02 -- host/auth.sh@45 -- # key=DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:32.410 17:26:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:32.410 17:26:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:32.410 17:26:02 -- host/auth.sh@49 -- # echo DHHC-1:00:YWIwN2YyZTMxZjcyMDBiMjhhN2U2ZDY3NzUyMzA2ZTTnL7pO: 00:22:32.410 17:26:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:22:32.410 17:26:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:32.410 17:26:02 -- host/auth.sh@68 -- # digest=sha512 00:22:32.410 17:26:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:32.410 17:26:02 -- host/auth.sh@68 -- # keyid=0 00:22:32.410 17:26:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.410 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.410 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.410 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.410 17:26:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:32.410 17:26:02 -- nvmf/common.sh@717 -- # local ip 00:22:32.410 17:26:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.410 17:26:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.410 17:26:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.410 17:26:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.410 17:26:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:32.410 17:26:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.410 17:26:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:32.410 17:26:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:32.410 17:26:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:32.410 17:26:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:32.410 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.410 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.977 nvme0n1 00:22:32.977 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.977 17:26:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.977 17:26:02 -- host/auth.sh@73 -- # jq -r '.[].name' 
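Every iteration traced above follows the same shape: host/auth.sh programs one DH-HMAC-CHAP secret into the kernel nvmet host entry (the three echoes at host/auth.sh@47-49), pins the SPDK initiator to a single digest and DH group via bdev_nvme_set_options, attaches a controller with the matching --dhchap-key, confirms it through bdev_nvme_get_controllers, and detaches before the next combination. Below is a minimal sketch of that flow, assuming scripts/rpc.py behind the rpc_cmd wrapper and the usual nvmet configfs attribute names, neither of which is spelled out verbatim in this trace. Note that the DHHC-1:00/01/02/03 prefixes on the secrets describe how the secret is transformed, not the key index passed on the command line (key0 and key1 above both carry a 00 prefix).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # assumed entry point behind the rpc_cmd wrapper
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
keys=()                                             # indexed 0..4 with the DHHC-1 secrets echoed in the trace

nvmet_auth_set_key() {                              # simplified: the traced helper looks the key up by id
    local digest=$1 dhgroup=$2 key=$3
    echo "hmac($digest)" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash"    # attribute names assumed
    echo "$dhgroup"      > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup"
    echo "$key"          > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"
}

for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3 4; do
        nvmet_auth_set_key sha512 "$dhgroup" "${keys[$keyid]}"
        "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # keyN refers to a key registered with SPDK earlier in the test run, not shown in this excerpt
        "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        # the controller must be visible before it is torn down for the next combination
        "$rpc" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
        "$rpc" bdev_nvme_detach_controller nvme0
    done
done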
00:22:32.977 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.977 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.977 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.977 17:26:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.977 17:26:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.977 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.977 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.977 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.977 17:26:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:32.977 17:26:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:32.977 17:26:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:32.977 17:26:02 -- host/auth.sh@44 -- # digest=sha512 00:22:32.977 17:26:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:32.977 17:26:02 -- host/auth.sh@44 -- # keyid=1 00:22:32.977 17:26:02 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:32.977 17:26:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:32.977 17:26:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:32.977 17:26:02 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:32.977 17:26:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:22:32.977 17:26:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:32.977 17:26:02 -- host/auth.sh@68 -- # digest=sha512 00:22:32.977 17:26:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:32.977 17:26:02 -- host/auth.sh@68 -- # keyid=1 00:22:32.977 17:26:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.977 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.977 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.977 17:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.977 17:26:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:32.977 17:26:02 -- nvmf/common.sh@717 -- # local ip 00:22:32.977 17:26:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.977 17:26:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.977 17:26:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.977 17:26:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.978 17:26:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:32.978 17:26:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.978 17:26:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:32.978 17:26:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:32.978 17:26:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:32.978 17:26:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:32.978 17:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.978 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:22:33.567 nvme0n1 00:22:33.567 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.567 17:26:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.567 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.567 17:26:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.567 17:26:03 -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.567 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.567 17:26:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.567 17:26:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.567 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.567 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.567 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.567 17:26:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.567 17:26:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:33.567 17:26:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.567 17:26:03 -- host/auth.sh@44 -- # digest=sha512 00:22:33.567 17:26:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:33.567 17:26:03 -- host/auth.sh@44 -- # keyid=2 00:22:33.567 17:26:03 -- host/auth.sh@45 -- # key=DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:33.567 17:26:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:33.567 17:26:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:33.567 17:26:03 -- host/auth.sh@49 -- # echo DHHC-1:01:OTY4MzIxNjg3MmUxNWNkMzUzNGEzZGE1YzdlNTE2ZDQeSsTs: 00:22:33.567 17:26:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:22:33.567 17:26:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.567 17:26:03 -- host/auth.sh@68 -- # digest=sha512 00:22:33.567 17:26:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:33.567 17:26:03 -- host/auth.sh@68 -- # keyid=2 00:22:33.567 17:26:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.567 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.567 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.567 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.567 17:26:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.567 17:26:03 -- nvmf/common.sh@717 -- # local ip 00:22:33.567 17:26:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.567 17:26:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.567 17:26:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.567 17:26:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.567 17:26:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.567 17:26:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.567 17:26:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.567 17:26:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.567 17:26:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.567 17:26:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:33.567 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.567 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:22:34.133 nvme0n1 00:22:34.133 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.133 17:26:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.133 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.133 17:26:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:34.133 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:22:34.133 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.133 17:26:03 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:22:34.133 17:26:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.133 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.133 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:22:34.133 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.133 17:26:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:34.133 17:26:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:34.133 17:26:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:34.133 17:26:03 -- host/auth.sh@44 -- # digest=sha512 00:22:34.133 17:26:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:34.133 17:26:03 -- host/auth.sh@44 -- # keyid=3 00:22:34.133 17:26:03 -- host/auth.sh@45 -- # key=DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:34.133 17:26:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:34.133 17:26:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:34.133 17:26:03 -- host/auth.sh@49 -- # echo DHHC-1:02:NzczODZiMjJiNTMwZDFhYjgzNTVlNTlhYTZiOGU2N2MyMGY2ZDI2OWY4ZjFkZDJjzvnImw==: 00:22:34.133 17:26:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:22:34.133 17:26:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:34.133 17:26:03 -- host/auth.sh@68 -- # digest=sha512 00:22:34.133 17:26:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:34.133 17:26:03 -- host/auth.sh@68 -- # keyid=3 00:22:34.133 17:26:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:34.133 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.133 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:22:34.133 17:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.133 17:26:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:34.133 17:26:03 -- nvmf/common.sh@717 -- # local ip 00:22:34.133 17:26:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.133 17:26:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.133 17:26:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.133 17:26:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.133 17:26:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:34.133 17:26:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.133 17:26:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:34.133 17:26:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:34.133 17:26:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:34.133 17:26:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:34.133 17:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.133 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:22:34.700 nvme0n1 00:22:34.700 17:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.700 17:26:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.700 17:26:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:34.700 17:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.700 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.700 17:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.700 17:26:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.700 17:26:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.700 
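The block of nvmf/common.sh@717-731 lines that repeats before every attach is the get_main_ns_ip helper: it maps the transport under test to the name of an environment variable and then dereferences it, which is why the trace shows ip=NVMF_INITIATOR_IP immediately followed by echo 10.0.0.1. A rough reconstruction from the trace follows; the bash indirection via ${!ip} is an assumption about how the variable name resolves to the address.

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # RDMA jobs use the target-side address
    ip_candidates["tcp"]=NVMF_INITIATOR_IP        # TCP jobs, like this one, use the initiator address
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                   # dereference the variable name; here it holds 10.0.0.1
    echo "${!ip}"
}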
17:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.700 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.700 17:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.700 17:26:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:34.700 17:26:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:34.700 17:26:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:34.700 17:26:04 -- host/auth.sh@44 -- # digest=sha512 00:22:34.700 17:26:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:34.700 17:26:04 -- host/auth.sh@44 -- # keyid=4 00:22:34.700 17:26:04 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:34.700 17:26:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:34.700 17:26:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:34.700 17:26:04 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2Q0ZjczZTBmMmI4YzE1YjQxOTY1MTczMmE3YzQ4OTdiMTc3ZGFkNGQ3MjJkNTg4N2Q2N2IxMWEyMDBjN2QzORtN+RM=: 00:22:34.701 17:26:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:22:34.701 17:26:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:34.701 17:26:04 -- host/auth.sh@68 -- # digest=sha512 00:22:34.701 17:26:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:34.701 17:26:04 -- host/auth.sh@68 -- # keyid=4 00:22:34.701 17:26:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:34.701 17:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.701 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.701 17:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.701 17:26:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:34.701 17:26:04 -- nvmf/common.sh@717 -- # local ip 00:22:34.701 17:26:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.701 17:26:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.701 17:26:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.701 17:26:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.701 17:26:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:34.701 17:26:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.701 17:26:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:34.701 17:26:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:34.701 17:26:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:34.701 17:26:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:34.701 17:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.701 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:22:35.268 nvme0n1 00:22:35.268 17:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.268 17:26:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.268 17:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.268 17:26:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:35.268 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.268 17:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.268 17:26:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.268 17:26:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.268 17:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.268 
17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.268 17:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.268 17:26:05 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:35.268 17:26:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:35.268 17:26:05 -- host/auth.sh@44 -- # digest=sha256 00:22:35.268 17:26:05 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:35.268 17:26:05 -- host/auth.sh@44 -- # keyid=1 00:22:35.268 17:26:05 -- host/auth.sh@45 -- # key=DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:35.268 17:26:05 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:35.268 17:26:05 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:35.268 17:26:05 -- host/auth.sh@49 -- # echo DHHC-1:00:YTFlMTU5OGQ3OTk3ZDMwOTkzMDM3ZjdlZTM5ODZlOWZiMTQ4NGE5MTJlYmQxOGEzIPu2CQ==: 00:22:35.268 17:26:05 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:35.268 17:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.268 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.268 17:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.268 17:26:05 -- host/auth.sh@119 -- # get_main_ns_ip 00:22:35.268 17:26:05 -- nvmf/common.sh@717 -- # local ip 00:22:35.268 17:26:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:35.268 17:26:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:35.268 17:26:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.268 17:26:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.268 17:26:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:35.268 17:26:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.268 17:26:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:35.268 17:26:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:35.268 17:26:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:35.268 17:26:05 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:35.268 17:26:05 -- common/autotest_common.sh@638 -- # local es=0 00:22:35.268 17:26:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:35.268 17:26:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:35.268 17:26:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:35.268 17:26:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:35.268 17:26:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:35.268 17:26:05 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:35.268 17:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.268 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.268 2024/04/25 17:26:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:35.268 request: 00:22:35.268 { 00:22:35.268 "method": 
"bdev_nvme_attach_controller", 00:22:35.268 "params": { 00:22:35.268 "name": "nvme0", 00:22:35.268 "trtype": "tcp", 00:22:35.268 "traddr": "10.0.0.1", 00:22:35.268 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:35.268 "adrfam": "ipv4", 00:22:35.268 "trsvcid": "4420", 00:22:35.268 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:22:35.268 } 00:22:35.268 } 00:22:35.268 Got JSON-RPC error response 00:22:35.268 GoRPCClient: error on JSON-RPC call 00:22:35.268 17:26:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:35.268 17:26:05 -- common/autotest_common.sh@641 -- # es=1 00:22:35.268 17:26:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:35.268 17:26:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:35.268 17:26:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:35.268 17:26:05 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.268 17:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.268 17:26:05 -- host/auth.sh@121 -- # jq length 00:22:35.268 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.268 17:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.527 17:26:05 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:22:35.527 17:26:05 -- host/auth.sh@124 -- # get_main_ns_ip 00:22:35.527 17:26:05 -- nvmf/common.sh@717 -- # local ip 00:22:35.527 17:26:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:35.527 17:26:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:35.527 17:26:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.527 17:26:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.527 17:26:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:35.527 17:26:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.527 17:26:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:35.527 17:26:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:35.527 17:26:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:35.527 17:26:05 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:35.527 17:26:05 -- common/autotest_common.sh@638 -- # local es=0 00:22:35.527 17:26:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:35.527 17:26:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:35.527 17:26:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:35.527 17:26:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:35.527 17:26:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:35.527 17:26:05 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:35.527 17:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.527 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.527 2024/04/25 17:26:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:35.527 
request: 00:22:35.527 { 00:22:35.527 "method": "bdev_nvme_attach_controller", 00:22:35.527 "params": { 00:22:35.527 "name": "nvme0", 00:22:35.527 "trtype": "tcp", 00:22:35.527 "traddr": "10.0.0.1", 00:22:35.527 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:35.527 "adrfam": "ipv4", 00:22:35.527 "trsvcid": "4420", 00:22:35.527 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:35.527 "dhchap_key": "key2" 00:22:35.527 } 00:22:35.527 } 00:22:35.527 Got JSON-RPC error response 00:22:35.527 GoRPCClient: error on JSON-RPC call 00:22:35.527 17:26:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:35.527 17:26:05 -- common/autotest_common.sh@641 -- # es=1 00:22:35.527 17:26:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:35.528 17:26:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:35.528 17:26:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:35.528 17:26:05 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.528 17:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.528 17:26:05 -- host/auth.sh@127 -- # jq length 00:22:35.528 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.528 17:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.528 17:26:05 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:22:35.528 17:26:05 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:22:35.528 17:26:05 -- host/auth.sh@130 -- # cleanup 00:22:35.528 17:26:05 -- host/auth.sh@24 -- # nvmftestfini 00:22:35.528 17:26:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:35.528 17:26:05 -- nvmf/common.sh@117 -- # sync 00:22:35.528 17:26:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.528 17:26:05 -- nvmf/common.sh@120 -- # set +e 00:22:35.528 17:26:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.528 17:26:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.528 rmmod nvme_tcp 00:22:35.528 rmmod nvme_fabrics 00:22:35.528 17:26:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.528 17:26:05 -- nvmf/common.sh@124 -- # set -e 00:22:35.528 17:26:05 -- nvmf/common.sh@125 -- # return 0 00:22:35.528 17:26:05 -- nvmf/common.sh@478 -- # '[' -n 90693 ']' 00:22:35.528 17:26:05 -- nvmf/common.sh@479 -- # killprocess 90693 00:22:35.528 17:26:05 -- common/autotest_common.sh@936 -- # '[' -z 90693 ']' 00:22:35.528 17:26:05 -- common/autotest_common.sh@940 -- # kill -0 90693 00:22:35.528 17:26:05 -- common/autotest_common.sh@941 -- # uname 00:22:35.528 17:26:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:35.528 17:26:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90693 00:22:35.528 17:26:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:35.528 17:26:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:35.528 killing process with pid 90693 00:22:35.528 17:26:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90693' 00:22:35.528 17:26:05 -- common/autotest_common.sh@955 -- # kill 90693 00:22:35.528 17:26:05 -- common/autotest_common.sh@960 -- # wait 90693 00:22:35.786 17:26:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:35.786 17:26:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:35.786 17:26:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:35.786 17:26:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.786 17:26:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.786 17:26:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.786 
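After the positive-path matrix, the test switches to expected failures: the target is reprogrammed with key1 (sha256/ffdhe2048), and attach attempts with no --dhchap-key at all and with the mismatched key2 must both return the JSON-RPC -32602 errors dumped above. The NOT wrapper turns those failures into passing assertions, and a final jq length check confirms no controller was left behind. A simplified sketch of that negative path, with NOT reduced to its essence (the real helper in autotest_common.sh also distinguishes crashes via es > 128) and $rpc standing in for the rpc_cmd wrapper:

NOT() { if "$@"; then return 1; else return 0; fi; }

# no key at all: the target now demands DH-HMAC-CHAP, so this must be rejected
NOT "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
# wrong key index: key2 does not match the key1 the target was just given
NOT "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
# either way nothing may remain attached
[[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]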
17:26:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.786 17:26:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.786 17:26:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:35.786 17:26:05 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:35.786 17:26:05 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:35.786 17:26:05 -- host/auth.sh@27 -- # clean_kernel_target 00:22:35.786 17:26:05 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:35.786 17:26:05 -- nvmf/common.sh@675 -- # echo 0 00:22:35.786 17:26:05 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:35.786 17:26:05 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:35.786 17:26:05 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:35.786 17:26:05 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:35.786 17:26:05 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:35.786 17:26:05 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:22:35.786 17:26:05 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:36.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:36.722 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:36.722 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:36.722 17:26:06 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.N31 /tmp/spdk.key-null.U3w /tmp/spdk.key-sha256.qwv /tmp/spdk.key-sha384.QlE /tmp/spdk.key-sha512.0ew /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:36.722 17:26:06 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:36.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:36.980 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:36.980 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:37.240 00:22:37.240 real 0m34.921s 00:22:37.240 user 0m32.295s 00:22:37.240 sys 0m3.487s 00:22:37.240 17:26:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:37.240 17:26:06 -- common/autotest_common.sh@10 -- # set +x 00:22:37.240 ************************************ 00:22:37.240 END TEST nvmf_auth 00:22:37.240 ************************************ 00:22:37.240 17:26:07 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:22:37.240 17:26:07 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:37.240 17:26:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:37.240 17:26:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.240 17:26:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.240 ************************************ 00:22:37.240 START TEST nvmf_digest 00:22:37.240 ************************************ 00:22:37.240 17:26:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:37.240 * Looking for test storage... 
00:22:37.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:37.240 17:26:07 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.240 17:26:07 -- nvmf/common.sh@7 -- # uname -s 00:22:37.240 17:26:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.240 17:26:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.240 17:26:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.240 17:26:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.240 17:26:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.240 17:26:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.240 17:26:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.240 17:26:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.240 17:26:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.240 17:26:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.240 17:26:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:22:37.240 17:26:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:22:37.240 17:26:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.240 17:26:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.241 17:26:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.241 17:26:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.241 17:26:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.241 17:26:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.241 17:26:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.241 17:26:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.241 17:26:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.501 17:26:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.501 17:26:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.501 17:26:07 -- paths/export.sh@5 -- # export PATH 00:22:37.501 17:26:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.501 17:26:07 -- nvmf/common.sh@47 -- # : 0 00:22:37.501 17:26:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.501 17:26:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.501 17:26:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.501 17:26:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.501 17:26:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.501 17:26:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.501 17:26:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.501 17:26:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.501 17:26:07 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:37.501 17:26:07 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:37.501 17:26:07 -- host/digest.sh@16 -- # runtime=2 00:22:37.501 17:26:07 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:37.501 17:26:07 -- host/digest.sh@138 -- # nvmftestinit 00:22:37.501 17:26:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:37.501 17:26:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.501 17:26:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:37.501 17:26:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:37.501 17:26:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:37.501 17:26:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.501 17:26:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.501 17:26:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.501 17:26:07 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:37.501 17:26:07 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:37.501 17:26:07 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:37.501 17:26:07 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:37.501 17:26:07 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:37.501 17:26:07 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:37.501 17:26:07 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.501 17:26:07 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.501 17:26:07 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:37.501 17:26:07 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:37.501 17:26:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:22:37.501 17:26:07 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:37.501 17:26:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:37.501 17:26:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.501 17:26:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:37.501 17:26:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:37.501 17:26:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:37.501 17:26:07 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:37.501 17:26:07 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:37.501 17:26:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:37.501 Cannot find device "nvmf_tgt_br" 00:22:37.501 17:26:07 -- nvmf/common.sh@155 -- # true 00:22:37.501 17:26:07 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.501 Cannot find device "nvmf_tgt_br2" 00:22:37.501 17:26:07 -- nvmf/common.sh@156 -- # true 00:22:37.501 17:26:07 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:37.501 17:26:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:37.501 Cannot find device "nvmf_tgt_br" 00:22:37.501 17:26:07 -- nvmf/common.sh@158 -- # true 00:22:37.501 17:26:07 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:37.501 Cannot find device "nvmf_tgt_br2" 00:22:37.501 17:26:07 -- nvmf/common.sh@159 -- # true 00:22:37.501 17:26:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:37.501 17:26:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:37.501 17:26:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.501 17:26:07 -- nvmf/common.sh@162 -- # true 00:22:37.501 17:26:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.501 17:26:07 -- nvmf/common.sh@163 -- # true 00:22:37.501 17:26:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:37.501 17:26:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:37.501 17:26:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:37.501 17:26:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:37.501 17:26:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:37.501 17:26:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:37.501 17:26:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:37.501 17:26:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.501 17:26:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:37.501 17:26:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:37.501 17:26:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:37.501 17:26:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:37.501 17:26:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:37.501 17:26:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:37.501 17:26:07 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:37.501 17:26:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:37.501 17:26:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:37.761 17:26:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:37.761 17:26:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:37.761 17:26:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:37.761 17:26:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:37.761 17:26:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:37.761 17:26:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:37.761 17:26:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:37.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:22:37.761 00:22:37.761 --- 10.0.0.2 ping statistics --- 00:22:37.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.761 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:37.761 17:26:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:37.761 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:37.761 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:37.761 00:22:37.761 --- 10.0.0.3 ping statistics --- 00:22:37.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.761 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:37.761 17:26:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:37.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:37.761 00:22:37.761 --- 10.0.0.1 ping statistics --- 00:22:37.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.761 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:37.761 17:26:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.761 17:26:07 -- nvmf/common.sh@422 -- # return 0 00:22:37.761 17:26:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:37.761 17:26:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.761 17:26:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:37.761 17:26:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:37.761 17:26:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.761 17:26:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:37.761 17:26:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:37.761 17:26:07 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:37.761 17:26:07 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:37.761 17:26:07 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:37.761 17:26:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:37.761 17:26:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.761 17:26:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.761 ************************************ 00:22:37.761 START TEST nvmf_digest_clean 00:22:37.761 ************************************ 00:22:37.761 17:26:07 -- common/autotest_common.sh@1111 -- # run_digest 00:22:37.761 17:26:07 -- host/digest.sh@120 -- # local dsa_initiator 00:22:37.761 17:26:07 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:37.761 17:26:07 -- host/digest.sh@121 -- # dsa_initiator=false 
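For reference, the initiator/target network that nvmf_veth_init assembles above condenses into a short standalone sketch; it only restates the ip and iptables commands already visible in the log (initiator 10.0.0.1 on nvmf_init_if, target 10.0.0.2 on nvmf_tgt_if inside the nvmf_tgt_ns_spdk namespace, both legs bridged over nvmf_br) and omits the second target interface (nvmf_tgt_if2, 10.0.0.3) that the full helper also creates:

  # Minimal initiator <-> target veth topology used by these tests.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # from the root (initiator) namespace to the target address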
00:22:37.761 17:26:07 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:37.761 17:26:07 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:37.761 17:26:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:37.761 17:26:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:37.761 17:26:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.761 17:26:07 -- nvmf/common.sh@470 -- # nvmfpid=92274 00:22:37.761 17:26:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:37.762 17:26:07 -- nvmf/common.sh@471 -- # waitforlisten 92274 00:22:37.762 17:26:07 -- common/autotest_common.sh@817 -- # '[' -z 92274 ']' 00:22:37.762 17:26:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.762 17:26:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:37.762 17:26:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.762 17:26:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:37.762 17:26:07 -- common/autotest_common.sh@10 -- # set +x 00:22:37.762 [2024-04-25 17:26:07.723504] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:22:37.762 [2024-04-25 17:26:07.723588] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.021 [2024-04-25 17:26:07.859921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.021 [2024-04-25 17:26:07.928220] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.021 [2024-04-25 17:26:07.928289] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.021 [2024-04-25 17:26:07.928305] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.021 [2024-04-25 17:26:07.928315] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.021 [2024-04-25 17:26:07.928325] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.021 [2024-04-25 17:26:07.928360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.958 17:26:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.958 17:26:08 -- common/autotest_common.sh@850 -- # return 0 00:22:38.958 17:26:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:38.958 17:26:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:38.958 17:26:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.958 17:26:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.958 17:26:08 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:38.958 17:26:08 -- host/digest.sh@126 -- # common_target_config 00:22:38.958 17:26:08 -- host/digest.sh@43 -- # rpc_cmd 00:22:38.958 17:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.958 17:26:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.958 null0 00:22:38.958 [2024-04-25 17:26:08.811800] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.958 [2024-04-25 17:26:08.835925] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.958 17:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.958 17:26:08 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:38.958 17:26:08 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:38.958 17:26:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:38.958 17:26:08 -- host/digest.sh@80 -- # rw=randread 00:22:38.958 17:26:08 -- host/digest.sh@80 -- # bs=4096 00:22:38.958 17:26:08 -- host/digest.sh@80 -- # qd=128 00:22:38.958 17:26:08 -- host/digest.sh@80 -- # scan_dsa=false 00:22:38.958 17:26:08 -- host/digest.sh@83 -- # bperfpid=92323 00:22:38.958 17:26:08 -- host/digest.sh@84 -- # waitforlisten 92323 /var/tmp/bperf.sock 00:22:38.958 17:26:08 -- common/autotest_common.sh@817 -- # '[' -z 92323 ']' 00:22:38.958 17:26:08 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:38.958 17:26:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:38.958 17:26:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:38.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:38.958 17:26:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:38.958 17:26:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:38.958 17:26:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.958 [2024-04-25 17:26:08.900699] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:22:38.958 [2024-04-25 17:26:08.900825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92323 ] 00:22:39.217 [2024-04-25 17:26:09.042819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.218 [2024-04-25 17:26:09.110936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.160 17:26:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:40.161 17:26:09 -- common/autotest_common.sh@850 -- # return 0 00:22:40.161 17:26:09 -- host/digest.sh@86 -- # false 00:22:40.161 17:26:09 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:40.161 17:26:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:40.161 17:26:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.161 17:26:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.424 nvme0n1 00:22:40.424 17:26:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:40.424 17:26:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:40.681 Running I/O for 2 seconds... 00:22:42.583 00:22:42.583 Latency(us) 00:22:42.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.583 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:42.583 nvme0n1 : 2.00 22261.98 86.96 0.00 0.00 5743.88 3127.85 17635.14 00:22:42.583 =================================================================================================================== 00:22:42.583 Total : 22261.98 86.96 0.00 0.00 5743.88 3127.85 17635.14 00:22:42.583 0 00:22:42.583 17:26:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:42.583 17:26:12 -- host/digest.sh@93 -- # get_accel_stats 00:22:42.583 17:26:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:42.583 17:26:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:42.583 | select(.opcode=="crc32c") 00:22:42.583 | "\(.module_name) \(.executed)"' 00:22:42.583 17:26:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:42.842 17:26:12 -- host/digest.sh@94 -- # false 00:22:42.842 17:26:12 -- host/digest.sh@94 -- # exp_module=software 00:22:42.842 17:26:12 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:42.842 17:26:12 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:42.842 17:26:12 -- host/digest.sh@98 -- # killprocess 92323 00:22:42.842 17:26:12 -- common/autotest_common.sh@936 -- # '[' -z 92323 ']' 00:22:42.842 17:26:12 -- common/autotest_common.sh@940 -- # kill -0 92323 00:22:42.842 17:26:12 -- common/autotest_common.sh@941 -- # uname 00:22:42.842 17:26:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.842 17:26:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92323 00:22:42.842 17:26:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:42.842 17:26:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:42.842 killing process with pid 92323 00:22:42.842 
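Each run_bperf pass above repeats the same driver pattern; a condensed sketch of the clean-digest flow for this first workload (randread, 4 KiB, queue depth 128), using the sockets, flags and jq filter from the log. Paths assume the spdk_repo layout of this run, and the socket wait loop is a simplified stand-in for the harness's waitforlisten:

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Launch bdevperf paused (--wait-for-rpc) so the NVMe-oF bdev can be configured over its RPC socket.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  until [ -S "$BPERF_SOCK" ]; do sleep 0.1; done

  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
  # --ddgst turns on the NVMe/TCP data digest that this test exercises.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

  # Check which accel module actually executed the crc32c digests.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With scan_dsa=false the harness expects the software module to have executed the digests, which is what the [[ software == software ]] comparison in the log verifies.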
17:26:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92323' 00:22:42.842 17:26:12 -- common/autotest_common.sh@955 -- # kill 92323 00:22:42.842 Received shutdown signal, test time was about 2.000000 seconds 00:22:42.842 00:22:42.842 Latency(us) 00:22:42.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.842 =================================================================================================================== 00:22:42.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.842 17:26:12 -- common/autotest_common.sh@960 -- # wait 92323 00:22:43.100 17:26:12 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:43.100 17:26:12 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:43.100 17:26:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:43.100 17:26:12 -- host/digest.sh@80 -- # rw=randread 00:22:43.100 17:26:12 -- host/digest.sh@80 -- # bs=131072 00:22:43.100 17:26:12 -- host/digest.sh@80 -- # qd=16 00:22:43.100 17:26:12 -- host/digest.sh@80 -- # scan_dsa=false 00:22:43.100 17:26:12 -- host/digest.sh@83 -- # bperfpid=92409 00:22:43.100 17:26:12 -- host/digest.sh@84 -- # waitforlisten 92409 /var/tmp/bperf.sock 00:22:43.100 17:26:12 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:43.100 17:26:12 -- common/autotest_common.sh@817 -- # '[' -z 92409 ']' 00:22:43.100 17:26:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:43.100 17:26:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:43.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:43.101 17:26:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:43.101 17:26:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:43.101 17:26:12 -- common/autotest_common.sh@10 -- # set +x 00:22:43.101 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:43.101 Zero copy mechanism will not be used. 00:22:43.101 [2024-04-25 17:26:12.940262] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:22:43.101 [2024-04-25 17:26:12.940373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92409 ] 00:22:43.101 [2024-04-25 17:26:13.072803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.359 [2024-04-25 17:26:13.123782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.359 17:26:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:43.359 17:26:13 -- common/autotest_common.sh@850 -- # return 0 00:22:43.359 17:26:13 -- host/digest.sh@86 -- # false 00:22:43.359 17:26:13 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:43.359 17:26:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:43.618 17:26:13 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.618 17:26:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.876 nvme0n1 00:22:43.877 17:26:13 -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:43.877 17:26:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:43.877 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:43.877 Zero copy mechanism will not be used. 00:22:43.877 Running I/O for 2 seconds... 00:22:46.409 00:22:46.409 Latency(us) 00:22:46.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.409 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:46.409 nvme0n1 : 2.00 9037.46 1129.68 0.00 0.00 1767.27 506.41 6285.50 00:22:46.409 =================================================================================================================== 00:22:46.409 Total : 9037.46 1129.68 0.00 0.00 1767.27 506.41 6285.50 00:22:46.409 0 00:22:46.409 17:26:15 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:46.409 17:26:15 -- host/digest.sh@93 -- # get_accel_stats 00:22:46.409 17:26:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:46.409 17:26:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:46.409 17:26:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:46.409 | select(.opcode=="crc32c") 00:22:46.409 | "\(.module_name) \(.executed)"' 00:22:46.409 17:26:16 -- host/digest.sh@94 -- # false 00:22:46.409 17:26:16 -- host/digest.sh@94 -- # exp_module=software 00:22:46.409 17:26:16 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:46.409 17:26:16 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:46.409 17:26:16 -- host/digest.sh@98 -- # killprocess 92409 00:22:46.409 17:26:16 -- common/autotest_common.sh@936 -- # '[' -z 92409 ']' 00:22:46.409 17:26:16 -- common/autotest_common.sh@940 -- # kill -0 92409 00:22:46.409 17:26:16 -- common/autotest_common.sh@941 -- # uname 00:22:46.409 17:26:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.409 17:26:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92409 00:22:46.409 17:26:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:46.409 
17:26:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:46.409 killing process with pid 92409 00:22:46.409 17:26:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92409' 00:22:46.409 17:26:16 -- common/autotest_common.sh@955 -- # kill 92409 00:22:46.409 Received shutdown signal, test time was about 2.000000 seconds 00:22:46.409 00:22:46.409 Latency(us) 00:22:46.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.409 =================================================================================================================== 00:22:46.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.409 17:26:16 -- common/autotest_common.sh@960 -- # wait 92409 00:22:46.409 17:26:16 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:22:46.409 17:26:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:46.409 17:26:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:46.409 17:26:16 -- host/digest.sh@80 -- # rw=randwrite 00:22:46.409 17:26:16 -- host/digest.sh@80 -- # bs=4096 00:22:46.409 17:26:16 -- host/digest.sh@80 -- # qd=128 00:22:46.409 17:26:16 -- host/digest.sh@80 -- # scan_dsa=false 00:22:46.409 17:26:16 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:46.409 17:26:16 -- host/digest.sh@83 -- # bperfpid=92479 00:22:46.409 17:26:16 -- host/digest.sh@84 -- # waitforlisten 92479 /var/tmp/bperf.sock 00:22:46.409 17:26:16 -- common/autotest_common.sh@817 -- # '[' -z 92479 ']' 00:22:46.409 17:26:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:46.409 17:26:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:46.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:46.409 17:26:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:46.409 17:26:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:46.409 17:26:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.409 [2024-04-25 17:26:16.285049] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:22:46.409 [2024-04-25 17:26:16.285132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92479 ] 00:22:46.680 [2024-04-25 17:26:16.418128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.680 [2024-04-25 17:26:16.471837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.680 17:26:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:46.680 17:26:16 -- common/autotest_common.sh@850 -- # return 0 00:22:46.680 17:26:16 -- host/digest.sh@86 -- # false 00:22:46.680 17:26:16 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:46.680 17:26:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:46.945 17:26:16 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:46.945 17:26:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:47.203 nvme0n1 00:22:47.203 17:26:17 -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:47.203 17:26:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:47.203 Running I/O for 2 seconds... 00:22:49.733 00:22:49.733 Latency(us) 00:22:49.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.733 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:49.733 nvme0n1 : 2.00 26037.44 101.71 0.00 0.00 4911.32 2427.81 11558.17 00:22:49.733 =================================================================================================================== 00:22:49.733 Total : 26037.44 101.71 0.00 0.00 4911.32 2427.81 11558.17 00:22:49.733 0 00:22:49.733 17:26:19 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:49.733 17:26:19 -- host/digest.sh@93 -- # get_accel_stats 00:22:49.733 17:26:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:49.733 17:26:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:49.733 17:26:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:49.733 | select(.opcode=="crc32c") 00:22:49.733 | "\(.module_name) \(.executed)"' 00:22:49.733 17:26:19 -- host/digest.sh@94 -- # false 00:22:49.733 17:26:19 -- host/digest.sh@94 -- # exp_module=software 00:22:49.733 17:26:19 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:49.733 17:26:19 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:49.733 17:26:19 -- host/digest.sh@98 -- # killprocess 92479 00:22:49.733 17:26:19 -- common/autotest_common.sh@936 -- # '[' -z 92479 ']' 00:22:49.733 17:26:19 -- common/autotest_common.sh@940 -- # kill -0 92479 00:22:49.733 17:26:19 -- common/autotest_common.sh@941 -- # uname 00:22:49.733 17:26:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:49.733 17:26:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92479 00:22:49.733 17:26:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:49.733 17:26:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:49.733 killing process with pid 92479 00:22:49.733 
17:26:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92479' 00:22:49.733 Received shutdown signal, test time was about 2.000000 seconds 00:22:49.733 00:22:49.733 Latency(us) 00:22:49.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.734 =================================================================================================================== 00:22:49.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:49.734 17:26:19 -- common/autotest_common.sh@955 -- # kill 92479 00:22:49.734 17:26:19 -- common/autotest_common.sh@960 -- # wait 92479 00:22:49.734 17:26:19 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:22:49.734 17:26:19 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:49.734 17:26:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:49.734 17:26:19 -- host/digest.sh@80 -- # rw=randwrite 00:22:49.734 17:26:19 -- host/digest.sh@80 -- # bs=131072 00:22:49.734 17:26:19 -- host/digest.sh@80 -- # qd=16 00:22:49.734 17:26:19 -- host/digest.sh@80 -- # scan_dsa=false 00:22:49.734 17:26:19 -- host/digest.sh@83 -- # bperfpid=92552 00:22:49.734 17:26:19 -- host/digest.sh@84 -- # waitforlisten 92552 /var/tmp/bperf.sock 00:22:49.734 17:26:19 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:49.734 17:26:19 -- common/autotest_common.sh@817 -- # '[' -z 92552 ']' 00:22:49.734 17:26:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:49.734 17:26:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:49.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:49.734 17:26:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:49.734 17:26:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:49.734 17:26:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.734 [2024-04-25 17:26:19.710519] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:22:49.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:49.992 Zero copy mechanism will not be used. 
00:22:49.992 [2024-04-25 17:26:19.711185] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92552 ] 00:22:49.992 [2024-04-25 17:26:19.846106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.992 [2024-04-25 17:26:19.903105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.928 17:26:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:50.928 17:26:20 -- common/autotest_common.sh@850 -- # return 0 00:22:50.928 17:26:20 -- host/digest.sh@86 -- # false 00:22:50.928 17:26:20 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:50.928 17:26:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:51.186 17:26:20 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.186 17:26:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.186 nvme0n1 00:22:51.445 17:26:21 -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:51.445 17:26:21 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:51.445 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:51.445 Zero copy mechanism will not be used. 00:22:51.445 Running I/O for 2 seconds... 00:22:53.347 00:22:53.347 Latency(us) 00:22:53.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.347 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:53.347 nvme0n1 : 2.00 7284.96 910.62 0.00 0.00 2191.49 1750.11 9175.04 00:22:53.347 =================================================================================================================== 00:22:53.347 Total : 7284.96 910.62 0.00 0.00 2191.49 1750.11 9175.04 00:22:53.347 0 00:22:53.347 17:26:23 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:53.347 17:26:23 -- host/digest.sh@93 -- # get_accel_stats 00:22:53.347 17:26:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:53.347 17:26:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:53.347 17:26:23 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:53.347 | select(.opcode=="crc32c") 00:22:53.347 | "\(.module_name) \(.executed)"' 00:22:53.605 17:26:23 -- host/digest.sh@94 -- # false 00:22:53.605 17:26:23 -- host/digest.sh@94 -- # exp_module=software 00:22:53.605 17:26:23 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:53.605 17:26:23 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:53.605 17:26:23 -- host/digest.sh@98 -- # killprocess 92552 00:22:53.605 17:26:23 -- common/autotest_common.sh@936 -- # '[' -z 92552 ']' 00:22:53.605 17:26:23 -- common/autotest_common.sh@940 -- # kill -0 92552 00:22:53.605 17:26:23 -- common/autotest_common.sh@941 -- # uname 00:22:53.605 17:26:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.605 17:26:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92552 00:22:53.605 17:26:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:53.605 
17:26:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:53.605 17:26:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92552' 00:22:53.605 killing process with pid 92552 00:22:53.605 17:26:23 -- common/autotest_common.sh@955 -- # kill 92552 00:22:53.605 Received shutdown signal, test time was about 2.000000 seconds 00:22:53.605 00:22:53.605 Latency(us) 00:22:53.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.605 =================================================================================================================== 00:22:53.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.605 17:26:23 -- common/autotest_common.sh@960 -- # wait 92552 00:22:53.863 17:26:23 -- host/digest.sh@132 -- # killprocess 92274 00:22:53.863 17:26:23 -- common/autotest_common.sh@936 -- # '[' -z 92274 ']' 00:22:53.863 17:26:23 -- common/autotest_common.sh@940 -- # kill -0 92274 00:22:53.863 17:26:23 -- common/autotest_common.sh@941 -- # uname 00:22:53.863 17:26:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.863 17:26:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92274 00:22:53.863 17:26:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:53.863 17:26:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:53.863 killing process with pid 92274 00:22:53.863 17:26:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92274' 00:22:53.863 17:26:23 -- common/autotest_common.sh@955 -- # kill 92274 00:22:53.863 17:26:23 -- common/autotest_common.sh@960 -- # wait 92274 00:22:54.121 00:22:54.121 real 0m16.257s 00:22:54.121 user 0m30.406s 00:22:54.121 sys 0m4.164s 00:22:54.122 17:26:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:54.122 ************************************ 00:22:54.122 END TEST nvmf_digest_clean 00:22:54.122 ************************************ 00:22:54.122 17:26:23 -- common/autotest_common.sh@10 -- # set +x 00:22:54.122 17:26:23 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:54.122 17:26:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:54.122 17:26:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:54.122 17:26:23 -- common/autotest_common.sh@10 -- # set +x 00:22:54.122 ************************************ 00:22:54.122 START TEST nvmf_digest_error 00:22:54.122 ************************************ 00:22:54.122 17:26:24 -- common/autotest_common.sh@1111 -- # run_digest_error 00:22:54.122 17:26:24 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:54.122 17:26:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:54.122 17:26:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:54.122 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:22:54.122 17:26:24 -- nvmf/common.sh@470 -- # nvmfpid=92669 00:22:54.122 17:26:24 -- nvmf/common.sh@471 -- # waitforlisten 92669 00:22:54.122 17:26:24 -- common/autotest_common.sh@817 -- # '[' -z 92669 ']' 00:22:54.122 17:26:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.122 17:26:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:54.122 17:26:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:54.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:54.122 17:26:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.122 17:26:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:54.122 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:22:54.122 [2024-04-25 17:26:24.087985] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:22:54.122 [2024-04-25 17:26:24.088065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.380 [2024-04-25 17:26:24.222375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.380 [2024-04-25 17:26:24.268220] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.380 [2024-04-25 17:26:24.268297] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.380 [2024-04-25 17:26:24.268309] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.380 [2024-04-25 17:26:24.268316] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.380 [2024-04-25 17:26:24.268322] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.380 [2024-04-25 17:26:24.268353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.315 17:26:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:55.315 17:26:24 -- common/autotest_common.sh@850 -- # return 0 00:22:55.315 17:26:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:55.315 17:26:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:55.315 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:22:55.315 17:26:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.315 17:26:25 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:55.315 17:26:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:55.315 17:26:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.315 [2024-04-25 17:26:25.024848] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:55.315 17:26:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:55.315 17:26:25 -- host/digest.sh@105 -- # common_target_config 00:22:55.315 17:26:25 -- host/digest.sh@43 -- # rpc_cmd 00:22:55.315 17:26:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:55.315 17:26:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.315 null0 00:22:55.315 [2024-04-25 17:26:25.090394] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.316 [2024-04-25 17:26:25.114497] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.316 17:26:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:55.316 17:26:25 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:55.316 17:26:25 -- host/digest.sh@54 -- # local rw bs qd 00:22:55.316 17:26:25 -- host/digest.sh@56 -- # rw=randread 00:22:55.316 17:26:25 -- host/digest.sh@56 -- # bs=4096 00:22:55.316 17:26:25 -- host/digest.sh@56 -- # qd=128 00:22:55.316 17:26:25 -- host/digest.sh@58 -- # bperfpid=92713 00:22:55.316 17:26:25 -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:55.316 17:26:25 -- host/digest.sh@60 -- # waitforlisten 92713 /var/tmp/bperf.sock 00:22:55.316 17:26:25 -- common/autotest_common.sh@817 -- # '[' -z 92713 ']' 00:22:55.316 17:26:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.316 17:26:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:55.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.316 17:26:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.316 17:26:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:55.316 17:26:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.316 [2024-04-25 17:26:25.178306] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:22:55.316 [2024-04-25 17:26:25.178400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92713 ] 00:22:55.574 [2024-04-25 17:26:25.319405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.574 [2024-04-25 17:26:25.387652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.509 17:26:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:56.509 17:26:26 -- common/autotest_common.sh@850 -- # return 0 00:22:56.509 17:26:26 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.509 17:26:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:56.509 17:26:26 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:56.509 17:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.509 17:26:26 -- common/autotest_common.sh@10 -- # set +x 00:22:56.509 17:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.509 17:26:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.509 17:26:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:56.767 nvme0n1 00:22:56.767 17:26:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:56.767 17:26:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.767 17:26:26 -- common/autotest_common.sh@10 -- # set +x 00:22:56.767 17:26:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.767 17:26:26 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:56.767 17:26:26 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.026 Running I/O for 2 seconds... 
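The error-injection variant differs from the clean runs in two RPCs, both visible above: the target's crc32c opcode is assigned to the error accel module before the framework comes up, and corruption is injected (accel_error_inject_error -o crc32c -t corrupt -i 256) after the initiator has attached with --ddgst and unlimited bdev retries; the nvme_tcp data digest errors in the trace that follows are those corrupted digests being detected and retried. A minimal sketch of the same sequence (the helper names are mine; the target RPC goes to the default /var/tmp/spdk.sock, the initiator RPC to bperf.sock, and the target's null bdev, TCP transport and 10.0.0.2:4420 listener are assumed to be configured as earlier in the log):

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  tgt_rpc()   { "$SPDK/scripts/rpc.py" "$@"; }              # nvmf_tgt, default RPC socket
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }

  # Target was started with --wait-for-rpc, so crc32c can be rerouted to the
  # error-injection accel module before the framework initializes.
  tgt_rpc accel_assign_opc -o crc32c -m error
  tgt_rpc framework_start_init    # target bdev/transport/listener config elided here

  # Initiator: keep NVMe error statistics and retry failed I/O indefinitely.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt_rpc accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Same corruption parameters the harness passes for this pass.
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests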
00:22:57.026 [2024-04-25 17:26:26.795900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.795957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.795970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.806600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.806650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.806662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.818475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.818523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.818535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.830666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.830714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.830737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.842245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.842293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.842305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.854198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.854245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.854257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.864036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.864085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.864097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.876626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.876690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.876702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.887670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.887727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.887741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.898301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.898348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.898360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.910497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.910544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.910555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.922033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.922082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.922094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.934290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.934339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.934351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.945342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.945390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.945401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.958546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.958593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.958605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.970316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.970363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.970374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.980071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.980119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.980131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.026 [2024-04-25 17:26:26.992439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.026 [2024-04-25 17:26:26.992487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.026 [2024-04-25 17:26:26.992499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.004820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.004891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.004903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.016040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.016087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.016098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.028227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.028298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:57.286 [2024-04-25 17:26:27.028327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.039918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.039965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.039976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.050355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.050402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.050414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.062123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.062171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.062183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.074559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.074606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.074618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.086624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.086672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.086683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.098819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.098868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.098880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.110225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.110272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:23686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.110284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.120378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.120426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.120438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.132379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.132434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.132447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.145232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.145278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.145290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.156695] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.156763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.156776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.168769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.168825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.168837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.179929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.179976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.179988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.190453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.190500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.190512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.200063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.286 [2024-04-25 17:26:27.200109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.286 [2024-04-25 17:26:27.200121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.286 [2024-04-25 17:26:27.211926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.287 [2024-04-25 17:26:27.211972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.287 [2024-04-25 17:26:27.211983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.287 [2024-04-25 17:26:27.224783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.287 [2024-04-25 17:26:27.224840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.287 [2024-04-25 17:26:27.224852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.287 [2024-04-25 17:26:27.234839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.287 [2024-04-25 17:26:27.234882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.287 [2024-04-25 17:26:27.234895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.287 [2024-04-25 17:26:27.249113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.287 [2024-04-25 17:26:27.249160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.287 [2024-04-25 17:26:27.249172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.287 [2024-04-25 17:26:27.259186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.287 [2024-04-25 17:26:27.259234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.287 [2024-04-25 17:26:27.259247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.545 [2024-04-25 17:26:27.273446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 
[2024-04-25 17:26:27.273494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.273506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.286562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.286612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.286625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.300315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.300366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.300379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.313103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.313137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.313149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.324540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.324590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.324603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.335915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.335962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.335973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.349471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.349518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.349530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.359888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.359937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.359949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.371897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.371944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.371956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.384078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.384125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.384137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.397010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.397059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.397071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.409832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.409882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.409894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.419999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.420046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.420058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.433651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.433699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.433711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.445835] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.445884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.445896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.458315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.458362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.458373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.468937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.468984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.468996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.481628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.481676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.481688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.494464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.494511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.494522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.505312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.505358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.505370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.546 [2024-04-25 17:26:27.517175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.546 [2024-04-25 17:26:27.517222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.546 [2024-04-25 17:26:27.517234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.530794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.530839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.530852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.540853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.540900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.540911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.552205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.552253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.552264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.566093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.566156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.566168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.576247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.576317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.576346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.588630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.588693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.588705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.599964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.600010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.600022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.611904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.611952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.611964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.623019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.623052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.623064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.634441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.634491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.634503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.649831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.649882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.649896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.663197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.663244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.663256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.677739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.677786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.677798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.687986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.688033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.688045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.699936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.699982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.699994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.711585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.711633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.711644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.723097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.723144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.723156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.734812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.734858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.806 [2024-04-25 17:26:27.734870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.806 [2024-04-25 17:26:27.747016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.806 [2024-04-25 17:26:27.747063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.807 [2024-04-25 17:26:27.747074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.807 [2024-04-25 17:26:27.757179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.807 [2024-04-25 17:26:27.757226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.807 [2024-04-25 17:26:27.757238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.807 [2024-04-25 17:26:27.768382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.807 [2024-04-25 17:26:27.768431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:57.807 [2024-04-25 17:26:27.768443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.807 [2024-04-25 17:26:27.781002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:57.807 [2024-04-25 17:26:27.781051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.807 [2024-04-25 17:26:27.781063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.792944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.792990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.793002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.804240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.804312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.804341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.815584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.815631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.815642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.827866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.827914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.827926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.838244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.838292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.838303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.849985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.850033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:21412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.850044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.860543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.860592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.860620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.872017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.872064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.872076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.883237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.883284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.883295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.893590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.893636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.893648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.905830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.905877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.905889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.916137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.916184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.916196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.929576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.929623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.929635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.941896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.941943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.941955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.953835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.953882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.953894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.964366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.964413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.964425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.976837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.976885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.976897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:27.988357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:27.988405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:27.988417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:28.000412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:28.000461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:28.000473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:28.012130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 
00:22:58.066 [2024-04-25 17:26:28.012178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:28.012190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:28.022558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:28.022606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:28.022617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.066 [2024-04-25 17:26:28.034206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.066 [2024-04-25 17:26:28.034253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.066 [2024-04-25 17:26:28.034265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.046551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.046600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.046612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.058595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.058642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.058654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.068187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.068236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.068248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.081056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.081103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.081115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.094324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.094371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.094383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.105937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.105985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.105997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.117738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.117785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.117797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.129447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.129494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.129505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.140725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.140781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.140793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.152377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.152425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.152437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.162959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.163006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.163018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.175391] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.175438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.175450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.188052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.188099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.188111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.199249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.199298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.199309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.208144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.208191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.208203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.221625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.221672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.221684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.233432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.233479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.233491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.244920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.244967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.244978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:58.326 [2024-04-25 17:26:28.256203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.256250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.256262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.267731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.267777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.267789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.279951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.279999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.280011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.326 [2024-04-25 17:26:28.290913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.326 [2024-04-25 17:26:28.290960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.326 [2024-04-25 17:26:28.290972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.303935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.304015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.304027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.315793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.315839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.315851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.325582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.325630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.325642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.337179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.337225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.337237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.349520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.349567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.349579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.361309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.361355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.361367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.372095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.372142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.372154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.383125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.383172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.383183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.395180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.395227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.395238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.407264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.407311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.407323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.418660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.418707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.418729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.430427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.430474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.430486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.441248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.441295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.441307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.453421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.453468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.453480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.466022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.466069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.466081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.477860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.477908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.586 [2024-04-25 17:26:28.477921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.491192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.586 [2024-04-25 17:26:28.491243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.586 [2024-04-25 17:26:28.491256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.586 [2024-04-25 17:26:28.503306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.587 [2024-04-25 17:26:28.503355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.587 [2024-04-25 17:26:28.503367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.587 [2024-04-25 17:26:28.517558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.587 [2024-04-25 17:26:28.517607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.587 [2024-04-25 17:26:28.517619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.587 [2024-04-25 17:26:28.528940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.587 [2024-04-25 17:26:28.528988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.587 [2024-04-25 17:26:28.529000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.587 [2024-04-25 17:26:28.540238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.587 [2024-04-25 17:26:28.540309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.587 [2024-04-25 17:26:28.540323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.587 [2024-04-25 17:26:28.552784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.587 [2024-04-25 17:26:28.552831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.587 [2024-04-25 17:26:28.552843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.565820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.565868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.565897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.578504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.578553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:16636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.578565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.589788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.589837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.589849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.602576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.602624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.602636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.614827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.614874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.614886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.627624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.627672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.627684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.640494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.640545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.640559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.654413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.846 [2024-04-25 17:26:28.654462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.846 [2024-04-25 17:26:28.654475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.846 [2024-04-25 17:26:28.667435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.667484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.667496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.682606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.682654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.682666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.697024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.697075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.697103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.709545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.709593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.709605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.719671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.719728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.719741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.731865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.731911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.731923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.743665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.743712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.743733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.755004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 
00:22:58.847 [2024-04-25 17:26:28.755063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.755076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.765714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.765773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.765785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 [2024-04-25 17:26:28.777482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x146a6a0) 00:22:58.847 [2024-04-25 17:26:28.777530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.847 [2024-04-25 17:26:28.777542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.847 00:22:58.847 Latency(us) 00:22:58.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.847 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:58.847 nvme0n1 : 2.00 21476.75 83.89 0.00 0.00 5954.04 3068.28 15847.80 00:22:58.847 =================================================================================================================== 00:22:58.847 Total : 21476.75 83.89 0.00 0.00 5954.04 3068.28 15847.80 00:22:58.847 0 00:22:58.847 17:26:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:58.847 17:26:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:58.847 17:26:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:58.847 17:26:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:58.847 | .driver_specific 00:22:58.847 | .nvme_error 00:22:58.847 | .status_code 00:22:58.847 | .command_transient_transport_error' 00:22:59.106 17:26:29 -- host/digest.sh@71 -- # (( 168 > 0 )) 00:22:59.106 17:26:29 -- host/digest.sh@73 -- # killprocess 92713 00:22:59.106 17:26:29 -- common/autotest_common.sh@936 -- # '[' -z 92713 ']' 00:22:59.106 17:26:29 -- common/autotest_common.sh@940 -- # kill -0 92713 00:22:59.106 17:26:29 -- common/autotest_common.sh@941 -- # uname 00:22:59.106 17:26:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:59.106 17:26:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92713 00:22:59.364 17:26:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:59.364 killing process with pid 92713 00:22:59.364 17:26:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:59.364 17:26:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92713' 00:22:59.364 Received shutdown signal, test time was about 2.000000 seconds 00:22:59.364 00:22:59.364 Latency(us) 00:22:59.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.365 
=================================================================================================================== 00:22:59.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.365 17:26:29 -- common/autotest_common.sh@955 -- # kill 92713 00:22:59.365 17:26:29 -- common/autotest_common.sh@960 -- # wait 92713 00:22:59.365 17:26:29 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:59.365 17:26:29 -- host/digest.sh@54 -- # local rw bs qd 00:22:59.365 17:26:29 -- host/digest.sh@56 -- # rw=randread 00:22:59.365 17:26:29 -- host/digest.sh@56 -- # bs=131072 00:22:59.365 17:26:29 -- host/digest.sh@56 -- # qd=16 00:22:59.365 17:26:29 -- host/digest.sh@58 -- # bperfpid=92798 00:22:59.365 17:26:29 -- host/digest.sh@60 -- # waitforlisten 92798 /var/tmp/bperf.sock 00:22:59.365 17:26:29 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:59.365 17:26:29 -- common/autotest_common.sh@817 -- # '[' -z 92798 ']' 00:22:59.365 17:26:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:59.365 17:26:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:59.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:59.365 17:26:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:59.365 17:26:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:59.365 17:26:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:59.365 Zero copy mechanism will not be used. 00:22:59.365 [2024-04-25 17:26:29.313630] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:22:59.365 [2024-04-25 17:26:29.313743] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92798 ] 00:22:59.623 [2024-04-25 17:26:29.448079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.623 [2024-04-25 17:26:29.498083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.623 17:26:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.623 17:26:29 -- common/autotest_common.sh@850 -- # return 0 00:22:59.623 17:26:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:59.623 17:26:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:59.882 17:26:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:59.882 17:26:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.882 17:26:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.882 17:26:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.882 17:26:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.882 17:26:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:00.141 nvme0n1 00:23:00.401 17:26:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:00.401 17:26:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.401 17:26:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.401 17:26:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.401 17:26:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:00.401 17:26:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:00.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:00.401 Zero copy mechanism will not be used. 00:23:00.401 Running I/O for 2 seconds... 
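For context, the wall of trace above reduces to a short command sequence; the sketch below is assembled only from invocations visible in this log (the bdevperf binary and bdevperf.py paths, /var/tmp/bperf.sock, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, nvme0 and nvme0n1 are all taken from the trace, and rpc_cmd stands for the second RPC helper used by the script, whose socket is not shown in this excerpt). It is a minimal sketch of the traced flow, not the test script itself:

    # start bdevperf on its own RPC socket: 2s randread, 131072-byte I/O, queue depth 16 (flags as traced)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    # keep per-error-code NVMe statistics and set the bdev retry count (options as traced)
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # inject crc32c corruption exactly as traced, via the script's other RPC helper
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # attach the TCP controller with data digest enabled (--ddgst)
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # run the workload, then read back the transient transport error count from iostat
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    bperf_rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR entries that follow appear to be the completions produced by that injected crc32c corruption, which the test later counts through the jq expression above.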
00:23:00.401 [2024-04-25 17:26:30.254633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.254689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.254703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.258608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.258656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.258669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.263320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.263368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.263380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.266204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.266251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.266263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.270247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.270294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.270306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.274891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.274938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.274950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.277916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.277963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.277975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.281690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.281762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.281774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.285362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.285408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.285420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.289233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.289280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.289291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.292042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.292089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.292100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.295865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.295913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.295924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.300066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.300113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.300125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.303189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.303235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.303247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.306726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.306773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.306784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.310626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.310674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.310686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.313471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.313518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.313529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.317250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.317297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.317309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.321347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.321394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.321405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.325377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.325424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.325436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.328144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.328190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.328201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.331597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.331644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.331656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.335434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.335481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.335493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.401 [2024-04-25 17:26:30.338288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.401 [2024-04-25 17:26:30.338334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.401 [2024-04-25 17:26:30.338345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.341410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.341456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.341468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.345451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.345497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.345509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.349086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.349150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.349161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.352037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.352083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 
[2024-04-25 17:26:30.352095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.355287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.355335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.355346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.358951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.358999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.359011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.363620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.363668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.363679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.366508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.366554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.366565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.370214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.370259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.370271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.402 [2024-04-25 17:26:30.374013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.402 [2024-04-25 17:26:30.374093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.402 [2024-04-25 17:26:30.374105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.662 [2024-04-25 17:26:30.378264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.662 [2024-04-25 17:26:30.378312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:00.662 [2024-04-25 17:26:30.378324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.662 [2024-04-25 17:26:30.382188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.662 [2024-04-25 17:26:30.382237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.382249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.385961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.386008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.386019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.389793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.389839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.389851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.393064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.393110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.393121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.397135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.397182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.397194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.399604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.399649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.399660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.404137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.404184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.404196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.407799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.407845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.407856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.411467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.411514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.411526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.414805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.414852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.414863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.418605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.418653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.418664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.421638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.421685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.421697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.425552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.425598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.425610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.429230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.429276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.429288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.433117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.433164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.433176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.436949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.436996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.437009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.440742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.440798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.440810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.444413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.444447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.444460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.448208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.448254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.448265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.452248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.452321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.452335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.455518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.455564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.455575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.459340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.459388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.459400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.463178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.463225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.463237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.467346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.467394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.467406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.471059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.471105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.471116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.474926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.474973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.474984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.478536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 [2024-04-25 17:26:30.478583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.663 [2024-04-25 17:26:30.478594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.663 [2024-04-25 17:26:30.481886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.663 
[2024-04-25 17:26:30.481933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.481944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.485747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.485792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.485803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.489428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.489475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.489486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.493212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.493259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.493270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.497123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.497170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.497181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.500999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.501045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.501056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.504161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.504206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.504218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.508591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.508654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.508681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.512550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.512599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.512627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.515046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.515092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.515103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.519380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.519427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.519438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.523938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.523986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.523997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.526836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.526881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.526892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.530108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.530154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.530165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.533433] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.533480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.533491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.537411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.537458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.537469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.541433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.541480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.541491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.544207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.544253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.544264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.547975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.548024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.548035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.551649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.551696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.551724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.555172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.555220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.555231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:00.664 [2024-04-25 17:26:30.558544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.558592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.558604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.562058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.562104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.562115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.565945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.565992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.566004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.570075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.570122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.570134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.573780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.573826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.573837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.577019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.577066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.664 [2024-04-25 17:26:30.577077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.664 [2024-04-25 17:26:30.580827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.664 [2024-04-25 17:26:30.580873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.580884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.584510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.584559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.584572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.588475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.588523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.588535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.592258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.592327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.592340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.595664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.595710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.595748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.599131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.599178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.599190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.602980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.603027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.603039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.606988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.607036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.610997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.611044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.611056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.614977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.615023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.615035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.618742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.618789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.618800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.622827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.622873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.622885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.625933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.625980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.625992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.629489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.629537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.629548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.633329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.633376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.633388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.665 [2024-04-25 17:26:30.637954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.665 [2024-04-25 17:26:30.638002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.665 [2024-04-25 17:26:30.638014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.642074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.642122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 [2024-04-25 17:26:30.642133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.646063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.646113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 [2024-04-25 17:26:30.646124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.649618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.649665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 [2024-04-25 17:26:30.649676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.653903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.653950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 [2024-04-25 17:26:30.653961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.656469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.656503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 [2024-04-25 17:26:30.656516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.660417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.660451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 
[2024-04-25 17:26:30.660464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.665027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.665073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 [2024-04-25 17:26:30.665085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.669540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.669587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.926 [2024-04-25 17:26:30.669598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.926 [2024-04-25 17:26:30.672709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.926 [2024-04-25 17:26:30.672766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.672777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.676272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.676360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.676373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.680252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.680323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.680336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.683735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.683781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.683792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.688153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.688201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.688212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.692428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.692465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.692479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.696253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.696328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.696342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.700919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.700970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.700983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.705374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.705421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.705433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.709896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.709947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.709960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.713678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.713751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.713765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.717583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.717631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.717643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.721589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.721635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.721647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.725142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.725189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.725200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.729287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.729334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.729346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.732939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.732986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.732998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.737414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.737461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.737472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.742313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.742371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.745518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.745564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.745576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.749222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.749270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.749281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.753279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.753326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.753337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.755975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.756021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.756032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.759408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.759455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.927 [2024-04-25 17:26:30.759467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.927 [2024-04-25 17:26:30.763348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.927 [2024-04-25 17:26:30.763395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.763407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.767128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.767175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.767186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.771295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 
17:26:30.771341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.771353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.774828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.774875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.774886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.778365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.778412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.778424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.782375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.782422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.782433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.785889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.785935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.785947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.789431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.789477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.789489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.793064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.793110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.793121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.796463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.796496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.796508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.799818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.799864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.799875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.803546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.803594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.803607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.807096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.807143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.807171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.810616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.810662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.810674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.814692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.814750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.814762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.818684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.818741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.818753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.822048] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.822095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.822122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.825771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.825818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.825829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.829364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.829411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.829423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.832690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.832744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.832756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.836145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.836191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.836203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.840152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.840198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.840210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.928 [2024-04-25 17:26:30.844057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.928 [2024-04-25 17:26:30.844103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.928 [2024-04-25 17:26:30.844115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:00.928 [2024-04-25 17:26:30.846925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.846972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.846983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.850995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.851042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.851053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.853903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.853951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.853962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.857817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.857865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.857877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.861871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.861919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.861931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.865169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.865215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.865226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.868935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.868982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.868994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.872956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.873002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.873013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.876929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.876977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.876988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.879995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.880042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.880053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.884053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.884100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.884111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.887862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.887908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.887920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.890638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.890684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.890696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.895211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.895258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.895269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:00.929 [2024-04-25 17:26:30.899940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:00.929 [2024-04-25 17:26:30.899988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-04-25 17:26:30.900001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.190 [2024-04-25 17:26:30.902988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.190 [2024-04-25 17:26:30.903035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.190 [2024-04-25 17:26:30.903047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.190 [2024-04-25 17:26:30.906971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.190 [2024-04-25 17:26:30.907019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.907031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.910939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.910987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.910998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.914516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.914563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.914574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.918699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.918756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.918768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.921744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.921790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.921801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.925768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.925815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.925826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.930096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.930160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.930172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.934008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.934055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.934067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.936894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.936939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.936950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.940933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.940980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.940992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.945556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.945603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.945614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.948783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.948827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 
[2024-04-25 17:26:30.948838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.952386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.952420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.952432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.957209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.957258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.957269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.961298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.961344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.961356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.964475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.964524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.964537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.969024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.969072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.969086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.973621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.973670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.973682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.977329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.977378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.977391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.982156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.982204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.982216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.986862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.986913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.986927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.991444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.991493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.191 [2024-04-25 17:26:30.991504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.191 [2024-04-25 17:26:30.994453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.191 [2024-04-25 17:26:30.994500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:30.994511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:30.998623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:30.998671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:30.998684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.002407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.002454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.002466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.006416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.006464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.006476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.010419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.010467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.010478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.013615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.013663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.013675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.017762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.017810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.017821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.021684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.021741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.021753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.025294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.025342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.025353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.029462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.029510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.029522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.033428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.033477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.033488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.036649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.036713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.036748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.040233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.040303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.040332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.044349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.044398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.044411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.047714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.047759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.047771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.051467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.051515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.051527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.055135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.055182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.055195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.059103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 
[2024-04-25 17:26:31.059152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.059164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.063414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.063461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.063473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.067583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.067631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.067642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.070500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.070547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.070560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.074486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.074534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.074546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.192 [2024-04-25 17:26:31.078327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.192 [2024-04-25 17:26:31.078375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.192 [2024-04-25 17:26:31.078387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.082299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.082347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.082359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.085817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.085865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.085877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.090218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.090266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.090277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.093638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.093686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.093698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.097686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.097745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.097758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.102214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.102278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.102289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.105484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.105532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.105544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.109041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.109089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.109115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.113143] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.113191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.113203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.115748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.115795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.115806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.119530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.119577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.119589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.123798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.123846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.123858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.128229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.128283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.128329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.130918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.130965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.130977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.135602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.135650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.135662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
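The repeated nvme_tcp_accel_seq_recv_compute_crc32_done "data digest error" entries above come from NVMe/TCP data-digest verification: each data PDU carries a CRC32C digest over its payload, the receiver recomputes it on arrival, and a mismatch is reported back on the affected READ as the TRANSIENT TRANSPORT ERROR (00/22) completion printed by spdk_nvme_print_completion. A minimal sketch of that check, assuming a plain bitwise CRC32C (Castagnoli) rather than SPDK's accelerated path, with hypothetical names, is:

/* Sketch only: verify an NVMe/TCP data digest (CRC32C). Illustrative bitwise
 * implementation; not SPDK's code, and verify_data_digest() is a made-up name. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            /* 0x82F63B78 is the reflected Castagnoli polynomial */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns non-zero when the digest carried in the PDU does not match the
 * payload, i.e. when the receiver would log a data digest error. */
static int verify_data_digest(const uint8_t *payload, size_t len, uint32_t pdu_ddgst)
{
    return crc32c(payload, len) != pdu_ddgst;
}

int main(void)
{
    uint8_t payload[32];
    memset(payload, 0xA5, sizeof(payload));
    uint32_t good = crc32c(payload, sizeof(payload));
    printf("intact: %d, corrupted: %d\n",
           verify_data_digest(payload, sizeof(payload), good),
           verify_data_digest(payload, sizeof(payload), good ^ 1));
    return 0;
}

In this test the digest is deliberately corrupted, so every READ in the burst fails the check and is retried, which is why the same error pattern repeats for each command below.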
00:23:01.193 [2024-04-25 17:26:31.140637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.140674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.140688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.144009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.144056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.144067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.148051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.148099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.148111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.151933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.151981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.151993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.154861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.154909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.154921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.159236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.193 [2024-04-25 17:26:31.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.193 [2024-04-25 17:26:31.159316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.193 [2024-04-25 17:26:31.164198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.194 [2024-04-25 17:26:31.164246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.194 [2024-04-25 17:26:31.164259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.167088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.167135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.167146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.171780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.171827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.171839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.175639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.175686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.175698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.178949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.178998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.179010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.183403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.183451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.183463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.188017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.188064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.188076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.191593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.191642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.191665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.195254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.195301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.195313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.198579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.198626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.198638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.202552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.202598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.202609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.206318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.206365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.206377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.210518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.210564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.210575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.214009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.454 [2024-04-25 17:26:31.214055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.454 [2024-04-25 17:26:31.214066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.454 [2024-04-25 17:26:31.217836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.217882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.217894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.221420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.221467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.221478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.224973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.225004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.225015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.229100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.229146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.229157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.232205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.232252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.232264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.235918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.235965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.235976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.239795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.239841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.239853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.243178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.243224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 
[2024-04-25 17:26:31.243235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.246912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.246959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.246970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.250593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.250640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.250652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.253923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.253970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.253981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.257650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.257697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.257709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.261588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.261635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.261646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.264208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.264253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.264265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.268146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.268192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.268204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.272244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.272314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.272344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.275269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.275316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.275328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.279016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.279064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.279076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.283130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.283177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.283189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.287160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.287207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.287218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.290226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.290273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.290284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.293996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.294043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.294054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.297599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.297645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.297656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.301681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.301739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.301751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.304934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.304980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.304991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.308923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.308969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.308981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.312752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.455 [2024-04-25 17:26:31.312807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.455 [2024-04-25 17:26:31.312819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.455 [2024-04-25 17:26:31.315778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.315824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.315835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.319466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.319511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.319523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.323240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.323287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.323298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.326686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.326758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.326770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.330834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.330881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.330893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.334414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.334460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.334472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.338347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.338393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.338405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.341988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.342033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.342044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.345247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 
[2024-04-25 17:26:31.345293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.345305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.348897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.348943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.348955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.352389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.352437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.352450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.355245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.355291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.355302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.359052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.359099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.359110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.362330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.362377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.362388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.365632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.365678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.365689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.369537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.369582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.369593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.372680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.372738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.372750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.376889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.376935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.376947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.381039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.381085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.381097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.384852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.384897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.384909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.387290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.387334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.387346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.391439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.391486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.391498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.395289] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.395334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.395346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.398288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.398332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.398344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.402210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.402257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.402269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.405804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.405850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.405861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.409651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.409698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.456 [2024-04-25 17:26:31.409709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.456 [2024-04-25 17:26:31.412985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.456 [2024-04-25 17:26:31.413031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.457 [2024-04-25 17:26:31.413042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.457 [2024-04-25 17:26:31.417078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.457 [2024-04-25 17:26:31.417125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.457 [2024-04-25 17:26:31.417136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:01.457 [2024-04-25 17:26:31.421152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.457 [2024-04-25 17:26:31.421197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.457 [2024-04-25 17:26:31.421208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.457 [2024-04-25 17:26:31.424342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.457 [2024-04-25 17:26:31.424392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.457 [2024-04-25 17:26:31.424405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.457 [2024-04-25 17:26:31.428716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.457 [2024-04-25 17:26:31.428775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.457 [2024-04-25 17:26:31.428787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.433097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.433143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.433155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.436828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.436875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.436887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.439838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.439884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.439896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.444249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.444319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.444350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.448441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.448491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.448504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.451476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.451522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.451533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.455797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.455843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.455855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.459945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.459991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.460003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.462977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.463022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.463033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.466608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.466654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.466666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.470840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.470886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.470897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.473896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.473941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.473952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.477908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.477954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.477965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.481934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.481981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.481992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.484862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.484906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.719 [2024-04-25 17:26:31.484917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.719 [2024-04-25 17:26:31.488826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.719 [2024-04-25 17:26:31.488871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.488883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.492455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.492490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.492503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.496072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.496119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.496145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.499853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.499901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.499912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.503155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.503201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.503212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.506328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.506375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.506386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.510295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.510341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.510353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.514421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.514467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.514479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.518190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.518236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.518247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.522076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.522123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 
[2024-04-25 17:26:31.522134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.525891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.525937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.525948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.529310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.529357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.529368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.533132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.533178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.533190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.536519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.536566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.536578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.540740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.540798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.540810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.543523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.543568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.543580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.547159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.547205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.547217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.551006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.551053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.551064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.554310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.554356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.554368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.557634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.557681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.557692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.562070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.562117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.562128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.720 [2024-04-25 17:26:31.566202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.720 [2024-04-25 17:26:31.566248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.720 [2024-04-25 17:26:31.566259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.570941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.570988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.571000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.574121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.574167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.574178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.578219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.578266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.578277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.582386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.582432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.582443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.585671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.585727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.585740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.589137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.589184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.589195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.593118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.593164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.593175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.596587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.596650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.596692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.600381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.600413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.600426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.603646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.603692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.603704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.607178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.607223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.607234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.611614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.611661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.611674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.614039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.614083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.614095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.618325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.618371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.618382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.621396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.621443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.621454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.625425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 
[2024-04-25 17:26:31.625472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.625484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.629148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.629194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.629206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.632928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.632974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.632985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.636468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.636501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.636513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.639955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.640002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.640013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.643900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.643947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.643959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.647579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.647626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.647637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.650582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.650629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.721 [2024-04-25 17:26:31.650640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.721 [2024-04-25 17:26:31.654278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.721 [2024-04-25 17:26:31.654326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.654337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.657829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.657875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.657887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.661390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.661436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.661448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.665491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.665538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.665550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.668522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.668571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.668583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.672588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.672652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.672664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.676549] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.676597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.676624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.679767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.679813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.679824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.683589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.683635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.683647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.686618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.686664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.686675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.722 [2024-04-25 17:26:31.690264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.722 [2024-04-25 17:26:31.690326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.722 [2024-04-25 17:26:31.690338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.983 [2024-04-25 17:26:31.693881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.983 [2024-04-25 17:26:31.693928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.983 [2024-04-25 17:26:31.693956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.983 [2024-04-25 17:26:31.698597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.983 [2024-04-25 17:26:31.698645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.983 [2024-04-25 17:26:31.698656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:01.983 [2024-04-25 17:26:31.701718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.983 [2024-04-25 17:26:31.701773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.983 [2024-04-25 17:26:31.701785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.983 [2024-04-25 17:26:31.706020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.983 [2024-04-25 17:26:31.706069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.983 [2024-04-25 17:26:31.706096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.983 [2024-04-25 17:26:31.710014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.983 [2024-04-25 17:26:31.710050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.983 [2024-04-25 17:26:31.710063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:01.983 [2024-04-25 17:26:31.714568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.983 [2024-04-25 17:26:31.714631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.983 [2024-04-25 17:26:31.714643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:01.984 [2024-04-25 17:26:31.718858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.984 [2024-04-25 17:26:31.718909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.984 [2024-04-25 17:26:31.718923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.984 [2024-04-25 17:26:31.723323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.984 [2024-04-25 17:26:31.723371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.984 [2024-04-25 17:26:31.723383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:01.984 [2024-04-25 17:26:31.727626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:01.984 [2024-04-25 17:26:31.727673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.984 [2024-04-25 17:26:31.727685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:01.984 [2024-04-25 17:26:31.731765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40)
00:23:01.984 [2024-04-25 17:26:31.731827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.984 [2024-04-25 17:26:31.731842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:01.984 [... the same three-line pattern (data digest error on tqpair=(0xc6dd40), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining reads on qid:1 between 17:26:31.736 and 17:26:32.248, differing only in the cid, lba and sqhd values ...]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:02.510 [2024-04-25 17:26:32.238114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.510 [2024-04-25 17:26:32.238141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.510 [2024-04-25 17:26:32.241501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:02.510 [2024-04-25 17:26:32.241548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.510 [2024-04-25 17:26:32.241559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.510 [2024-04-25 17:26:32.244844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:02.510 [2024-04-25 17:26:32.244891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.510 [2024-04-25 17:26:32.244904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.510 [2024-04-25 17:26:32.248129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6dd40) 00:23:02.510 [2024-04-25 17:26:32.248176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.510 [2024-04-25 17:26:32.248187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.510 00:23:02.510 Latency(us) 00:23:02.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.510 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:02.510 nvme0n1 : 2.00 8304.12 1038.01 0.00 0.00 1923.23 547.37 5719.51 00:23:02.510 =================================================================================================================== 00:23:02.510 Total : 8304.12 1038.01 0.00 0.00 1923.23 547.37 5719.51 00:23:02.510 0 00:23:02.510 17:26:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:02.510 17:26:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:02.510 17:26:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:02.510 17:26:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:02.510 | .driver_specific 00:23:02.510 | .nvme_error 00:23:02.510 | .status_code 00:23:02.510 | .command_transient_transport_error' 00:23:02.769 17:26:32 -- host/digest.sh@71 -- # (( 536 > 0 )) 00:23:02.769 17:26:32 -- host/digest.sh@73 -- # killprocess 92798 00:23:02.769 17:26:32 -- common/autotest_common.sh@936 -- # '[' -z 92798 ']' 00:23:02.769 17:26:32 -- common/autotest_common.sh@940 -- # kill -0 92798 00:23:02.769 17:26:32 -- common/autotest_common.sh@941 -- # uname 00:23:02.769 17:26:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:02.769 17:26:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92798 00:23:02.769 
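The trace above is the pass/fail check for the randread pass that just finished: host/digest.sh queries bdevperf's per-bdev statistics over the bperf RPC socket and pulls the NVMe transient-transport-error counter out of the JSON with jq, requiring it to be non-zero (536 in this run). A minimal sketch of that check, assuming the same socket path, bdev name, and repo layout seen in this trace:

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check traced above (host/digest.sh).
    # Assumes bdevperf is listening on /var/tmp/bperf.sock, exposes bdev "nvme0n1",
    # and that bdev_nvme_set_options --nvme-error-stat was applied so the per-status
    # NVMe error counters are populated.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

    # The injected CRC32C corruption must surface as transient transport errors,
    # so the run only counts as a pass when the counter is greater than zero.
    (( errcount > 0 ))
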
17:26:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:02.769 17:26:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:02.769 killing process with pid 92798 00:23:02.769 17:26:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92798' 00:23:02.769 Received shutdown signal, test time was about 2.000000 seconds 00:23:02.769 00:23:02.769 Latency(us) 00:23:02.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.769 =================================================================================================================== 00:23:02.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.769 17:26:32 -- common/autotest_common.sh@955 -- # kill 92798 00:23:02.769 17:26:32 -- common/autotest_common.sh@960 -- # wait 92798 00:23:02.769 17:26:32 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:23:02.769 17:26:32 -- host/digest.sh@54 -- # local rw bs qd 00:23:02.769 17:26:32 -- host/digest.sh@56 -- # rw=randwrite 00:23:02.769 17:26:32 -- host/digest.sh@56 -- # bs=4096 00:23:02.769 17:26:32 -- host/digest.sh@56 -- # qd=128 00:23:02.769 17:26:32 -- host/digest.sh@58 -- # bperfpid=92869 00:23:02.769 17:26:32 -- host/digest.sh@60 -- # waitforlisten 92869 /var/tmp/bperf.sock 00:23:02.769 17:26:32 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:02.769 17:26:32 -- common/autotest_common.sh@817 -- # '[' -z 92869 ']' 00:23:02.769 17:26:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:02.769 17:26:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:02.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:02.769 17:26:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:02.769 17:26:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:02.769 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:23:03.028 [2024-04-25 17:26:32.778125] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:23:03.028 [2024-04-25 17:26:32.778222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92869 ] 00:23:03.028 [2024-04-25 17:26:32.915851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.028 [2024-04-25 17:26:32.967442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.965 17:26:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:03.965 17:26:33 -- common/autotest_common.sh@850 -- # return 0 00:23:03.965 17:26:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:03.965 17:26:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:04.223 17:26:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:04.223 17:26:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.223 17:26:33 -- common/autotest_common.sh@10 -- # set +x 00:23:04.223 17:26:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.223 17:26:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:04.223 17:26:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:04.482 nvme0n1 00:23:04.482 17:26:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:04.482 17:26:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.482 17:26:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.482 17:26:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.482 17:26:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:04.482 17:26:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:04.482 Running I/O for 2 seconds... 
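The trace above records the complete setup for the randwrite error-injection pass before bdevperf reports "Running I/O for 2 seconds...": a fresh bdevperf is started in wait-for-RPC mode, NVMe error statistics and unlimited command retries are enabled, the controller is attached with the TCP data digest turned on, and the accel layer is told to corrupt every 256th CRC32C result. A hedged sketch of that sequence, using only the commands visible in the trace (the paths, the 10.0.0.2:4420 address, and the nqn.2016-06.io.spdk:cnode1 NQN are taken from this run; the socket-wait loop is a simplified stand-in for the harness's waitforlisten helper):

    #!/usr/bin/env bash
    # Sketch of the "run_bperf_err randwrite 4096 128" setup traced above (host/digest.sh).
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bperf.sock

    # Start bdevperf idle (-z) with the randwrite/4096/qd128 workload and wait for its socket.
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    while [[ ! -S "$sock" ]]; do sleep 0.1; done   # simplified stand-in for waitforlisten

    # Keep NVMe error statistics and retry failed commands indefinitely.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Error injection is configured through rpc_cmd, which in the trace talks to the SPDK
    # target application rather than going through the bperf socket: clear any previous
    # rule, attach the controller with data digest enabled (--ddgst), then corrupt every
    # 256th CRC32C calculation so the digest check fails.
    "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the 2-second run; the data digest errors logged below are the expected result.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
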
00:23:04.482 [2024-04-25 17:26:34.423026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f6458 00:23:04.482 [2024-04-25 17:26:34.424129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.482 [2024-04-25 17:26:34.424184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:04.482 [2024-04-25 17:26:34.435710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e5658 00:23:04.482 [2024-04-25 17:26:34.437529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.482 [2024-04-25 17:26:34.437579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:04.482 [2024-04-25 17:26:34.446523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ecc78 00:23:04.482 [2024-04-25 17:26:34.448352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.482 [2024-04-25 17:26:34.448389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:04.482 [2024-04-25 17:26:34.454041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0788 00:23:04.482 [2024-04-25 17:26:34.455034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.482 [2024-04-25 17:26:34.455082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.465439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e88f8 00:23:04.741 [2024-04-25 17:26:34.466397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.466445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.475818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e12d8 00:23:04.741 [2024-04-25 17:26:34.476527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.476564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.487255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e0a68 00:23:04.741 [2024-04-25 17:26:34.488762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.488814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.497161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190de470 00:23:04.741 [2024-04-25 17:26:34.498395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.498442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.507274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190de8a8 00:23:04.741 [2024-04-25 17:26:34.508719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.508774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.516881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f57b0 00:23:04.741 [2024-04-25 17:26:34.517941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.517989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.526771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e0630 00:23:04.741 [2024-04-25 17:26:34.527937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.527984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.537213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1ca0 00:23:04.741 [2024-04-25 17:26:34.538335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.538380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.548657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f3e60 00:23:04.741 [2024-04-25 17:26:34.550204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.550251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.559140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190df988 00:23:04.741 [2024-04-25 17:26:34.560926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.560973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.566473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e1b48 00:23:04.741 [2024-04-25 17:26:34.567305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.567351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.578519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f35f0 00:23:04.741 [2024-04-25 17:26:34.579961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.580009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.588497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190edd58 00:23:04.741 [2024-04-25 17:26:34.589560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.589606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.598405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e23b8 00:23:04.741 [2024-04-25 17:26:34.599581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.599627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.608131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f3a28 00:23:04.741 [2024-04-25 17:26:34.609372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.609419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.618035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e7c50 00:23:04.741 [2024-04-25 17:26:34.619008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.619055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.627823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eea00 00:23:04.741 [2024-04-25 17:26:34.628772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.628828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.641545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f81e0 00:23:04.741 [2024-04-25 17:26:34.643233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.643279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.651956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ebfd0 00:23:04.741 [2024-04-25 17:26:34.653821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.653870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.662102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190dfdc0 00:23:04.741 [2024-04-25 17:26:34.662927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.662967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.675158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc128 00:23:04.741 [2024-04-25 17:26:34.676729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.676783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.685397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0350 00:23:04.741 [2024-04-25 17:26:34.686470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.686517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.695639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190de038 00:23:04.741 [2024-04-25 17:26:34.696804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.696849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.705557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fef90 00:23:04.741 [2024-04-25 17:26:34.706448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.741 [2024-04-25 17:26:34.706494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:04.741 [2024-04-25 17:26:34.717335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e4de8 00:23:05.000 [2024-04-25 17:26:34.718160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.000 [2024-04-25 17:26:34.718208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:05.000 [2024-04-25 17:26:34.731176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190df550 00:23:05.000 [2024-04-25 17:26:34.733244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.000 [2024-04-25 17:26:34.733293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:05.000 [2024-04-25 17:26:34.738973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6b70 00:23:05.000 [2024-04-25 17:26:34.740026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.000 [2024-04-25 17:26:34.740071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.751525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f81e0 00:23:05.001 [2024-04-25 17:26:34.753261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.753307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.762930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc998 00:23:05.001 [2024-04-25 17:26:34.764839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.764887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.772900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190feb58 00:23:05.001 [2024-04-25 17:26:34.774084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.774119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.785016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6b70 00:23:05.001 [2024-04-25 17:26:34.785998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 
17:26:34.786047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.798606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fcdd0 00:23:05.001 [2024-04-25 17:26:34.800057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.800105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.808689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ddc00 00:23:05.001 [2024-04-25 17:26:34.809883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.809929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.820016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e95a0 00:23:05.001 [2024-04-25 17:26:34.821315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.821362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.830413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc560 00:23:05.001 [2024-04-25 17:26:34.831756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.831826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.840392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fdeb0 00:23:05.001 [2024-04-25 17:26:34.841455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.841500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.852250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eea00 00:23:05.001 [2024-04-25 17:26:34.853790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.853836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.862720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f31b8 00:23:05.001 [2024-04-25 17:26:34.863775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:05.001 [2024-04-25 17:26:34.863834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.872467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fbcf0 00:23:05.001 [2024-04-25 17:26:34.874484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.874532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.884467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1430 00:23:05.001 [2024-04-25 17:26:34.885652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.885698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.894551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f3a28 00:23:05.001 [2024-04-25 17:26:34.895494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.895540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.905168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f96f8 00:23:05.001 [2024-04-25 17:26:34.906298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.906343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.915220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e49b0 00:23:05.001 [2024-04-25 17:26:34.916438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.916470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.927062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ee190 00:23:05.001 [2024-04-25 17:26:34.928915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.928960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.934480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f4f40 00:23:05.001 [2024-04-25 17:26:34.935412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12381 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.935456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.946439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0bc0 00:23:05.001 [2024-04-25 17:26:34.947994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.948044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.956597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f6890 00:23:05.001 [2024-04-25 17:26:34.958111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.958157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.965059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1868 00:23:05.001 [2024-04-25 17:26:34.965744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.965800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:05.001 [2024-04-25 17:26:34.975523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ee5c8 00:23:05.001 [2024-04-25 17:26:34.976851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.001 [2024-04-25 17:26:34.976896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:34.988439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e5220 00:23:05.260 [2024-04-25 17:26:34.990146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:34.990190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:34.995728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc560 00:23:05.260 [2024-04-25 17:26:34.996607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:34.996667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:35.007674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ee190 00:23:05.260 [2024-04-25 17:26:35.009026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:18002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:35.009071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:35.017564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e5a90 00:23:05.260 [2024-04-25 17:26:35.018704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:35.018770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:35.027156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e1f80 00:23:05.260 [2024-04-25 17:26:35.028215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:35.028258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:35.036562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ec408 00:23:05.260 [2024-04-25 17:26:35.037482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:35.037527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:35.048323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fda78 00:23:05.260 [2024-04-25 17:26:35.049393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:35.049439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:35.057882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f6890 00:23:05.260 [2024-04-25 17:26:35.058809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.260 [2024-04-25 17:26:35.058863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:05.260 [2024-04-25 17:26:35.069684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e1b48 00:23:05.261 [2024-04-25 17:26:35.071482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.071527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.077128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e9e10 00:23:05.261 [2024-04-25 17:26:35.078200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.078245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.089152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fa3a0 00:23:05.261 [2024-04-25 17:26:35.090705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.090755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.096444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e88f8 00:23:05.261 [2024-04-25 17:26:35.097301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.097345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.106925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc128 00:23:05.261 [2024-04-25 17:26:35.107814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.107867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.118828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fac10 00:23:05.261 [2024-04-25 17:26:35.120326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.120361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.127835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f6890 00:23:05.261 [2024-04-25 17:26:35.129609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.129655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.138886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e4140 00:23:05.261 [2024-04-25 17:26:35.139884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.139931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.148707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f96f8 00:23:05.261 [2024-04-25 17:26:35.149857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.149903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.158746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eea00 00:23:05.261 [2024-04-25 17:26:35.159995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.160042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.171275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fb048 00:23:05.261 [2024-04-25 17:26:35.173222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.173268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.179039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f5378 00:23:05.261 [2024-04-25 17:26:35.180011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.180055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.190967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f31b8 00:23:05.261 [2024-04-25 17:26:35.192541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.192611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.200444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f4298 00:23:05.261 [2024-04-25 17:26:35.201772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.201828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.210362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e5658 00:23:05.261 [2024-04-25 17:26:35.211630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.211675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.223056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f7da8 00:23:05.261 [2024-04-25 
17:26:35.225116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.225163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.261 [2024-04-25 17:26:35.230792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ebfd0 00:23:05.261 [2024-04-25 17:26:35.231849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.261 [2024-04-25 17:26:35.231900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.242566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6b70 00:23:05.520 [2024-04-25 17:26:35.243654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.243728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.254959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e0a68 00:23:05.520 [2024-04-25 17:26:35.256512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.256565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.264790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6b70 00:23:05.520 [2024-04-25 17:26:35.266088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.266135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.275097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fda78 00:23:05.520 [2024-04-25 17:26:35.276523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.276573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.286103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ed4e8 00:23:05.520 [2024-04-25 17:26:35.287065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.287112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.296773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ec408 
00:23:05.520 [2024-04-25 17:26:35.297971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.298016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.306337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e49b0 00:23:05.520 [2024-04-25 17:26:35.307500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.307546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.318438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fb8b8 00:23:05.520 [2024-04-25 17:26:35.320220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.320265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:05.520 [2024-04-25 17:26:35.325807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f4b08 00:23:05.520 [2024-04-25 17:26:35.326573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.520 [2024-04-25 17:26:35.326617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.338423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e73e0 00:23:05.521 [2024-04-25 17:26:35.340044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.340091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.345502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e3d08 00:23:05.521 [2024-04-25 17:26:35.346276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.346323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.355774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e23b8 00:23:05.521 [2024-04-25 17:26:35.356566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.356633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.367519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f82690) with pdu=0x2000190eb760 00:23:05.521 [2024-04-25 17:26:35.368947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.368995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.376972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1430 00:23:05.521 [2024-04-25 17:26:35.378051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.378098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.386794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e1710 00:23:05.521 [2024-04-25 17:26:35.387880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.387925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.398631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ff3c8 00:23:05.521 [2024-04-25 17:26:35.400443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.400492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.408976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fac10 00:23:05.521 [2024-04-25 17:26:35.410611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.410656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.419026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f6cc8 00:23:05.521 [2024-04-25 17:26:35.420583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.420658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.428954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f35f0 00:23:05.521 [2024-04-25 17:26:35.430485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.430530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.438770] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6b70 00:23:05.521 [2024-04-25 17:26:35.439999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.440047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.449051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190feb58 00:23:05.521 [2024-04-25 17:26:35.450313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.450358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.459500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e9e10 00:23:05.521 [2024-04-25 17:26:35.461012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.461061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.469694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eea00 00:23:05.521 [2024-04-25 17:26:35.471095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.471140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.479385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e12d8 00:23:05.521 [2024-04-25 17:26:35.480736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.480789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:05.521 [2024-04-25 17:26:35.488999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e7818 00:23:05.521 [2024-04-25 17:26:35.490142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.521 [2024-04-25 17:26:35.490187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.499569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190edd58 00:23:05.817 [2024-04-25 17:26:35.500958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.817 [2024-04-25 17:26:35.501007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.515105] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fdeb0 00:23:05.817 [2024-04-25 17:26:35.517254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.817 [2024-04-25 17:26:35.517316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.523987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6738 00:23:05.817 [2024-04-25 17:26:35.524982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.817 [2024-04-25 17:26:35.525027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.534993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f7538 00:23:05.817 [2024-04-25 17:26:35.536020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.817 [2024-04-25 17:26:35.536066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.547055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e7c50 00:23:05.817 [2024-04-25 17:26:35.548757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.817 [2024-04-25 17:26:35.548810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.554419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc128 00:23:05.817 [2024-04-25 17:26:35.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.817 [2024-04-25 17:26:35.555238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.564744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eb760 00:23:05.817 [2024-04-25 17:26:35.565501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.817 [2024-04-25 17:26:35.565546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:05.817 [2024-04-25 17:26:35.576615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f92c0 00:23:05.817 [2024-04-25 17:26:35.577573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.577619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:05.818 
[2024-04-25 17:26:35.586252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ebfd0 00:23:05.818 [2024-04-25 17:26:35.587073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.587151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.595267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fa7d8 00:23:05.818 [2024-04-25 17:26:35.596198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.596256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.607103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e8d30 00:23:05.818 [2024-04-25 17:26:35.608664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.608733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.616569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e1b48 00:23:05.818 [2024-04-25 17:26:35.617796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.617867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.626463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1430 00:23:05.818 [2024-04-25 17:26:35.627578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.627624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.638477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1ca0 00:23:05.818 [2024-04-25 17:26:35.640354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.640402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.645890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc128 00:23:05.818 [2024-04-25 17:26:35.646873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.646917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001b 
p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.657929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ebfd0 00:23:05.818 [2024-04-25 17:26:35.659489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.659534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.668138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ef6a8 00:23:05.818 [2024-04-25 17:26:35.669726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.669782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.676345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e49b0 00:23:05.818 [2024-04-25 17:26:35.677401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.677446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.687029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e5ec8 00:23:05.818 [2024-04-25 17:26:35.688141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.688185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.696568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eaef0 00:23:05.818 [2024-04-25 17:26:35.697606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.697650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.706243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc998 00:23:05.818 [2024-04-25 17:26:35.707071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.707132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.718779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f8618 00:23:05.818 [2024-04-25 17:26:35.720484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.720533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.726940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f8a50 00:23:05.818 [2024-04-25 17:26:35.728064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.728108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.736513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc560 00:23:05.818 [2024-04-25 17:26:35.737552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.737596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.746226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f57b0 00:23:05.818 [2024-04-25 17:26:35.747090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.747151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.760490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f8618 00:23:05.818 [2024-04-25 17:26:35.762360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.762405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.771670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0bc0 00:23:05.818 [2024-04-25 17:26:35.773501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.773550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:05.818 [2024-04-25 17:26:35.781560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e4de8 00:23:05.818 [2024-04-25 17:26:35.782490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:05.818 [2024-04-25 17:26:35.782539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.796664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fe2e8 00:23:06.077 [2024-04-25 17:26:35.798505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.798554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.809303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190df550 00:23:06.077 [2024-04-25 17:26:35.811011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.811059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.819673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0bc0 00:23:06.077 [2024-04-25 17:26:35.821234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.821281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.830563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e95a0 00:23:06.077 [2024-04-25 17:26:35.831964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.832012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.840563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ebb98 00:23:06.077 [2024-04-25 17:26:35.841542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.841587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.850516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1868 00:23:06.077 [2024-04-25 17:26:35.851606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.851650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.860966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f96f8 00:23:06.077 [2024-04-25 17:26:35.861573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.861652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.873203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc998 00:23:06.077 [2024-04-25 17:26:35.874667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.874737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.883966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e49b0 00:23:06.077 [2024-04-25 17:26:35.885328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.885375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.895459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0ff8 00:23:06.077 [2024-04-25 17:26:35.897012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.897077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.909524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6b70 00:23:06.077 [2024-04-25 17:26:35.911584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.911631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.917766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e3d08 00:23:06.077 [2024-04-25 17:26:35.918816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.918871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.931029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e2c28 00:23:06.077 [2024-04-25 17:26:35.932564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.932630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.941364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e8d30 00:23:06.077 [2024-04-25 17:26:35.942711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.942764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.952094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e2c28 00:23:06.077 [2024-04-25 17:26:35.953614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.953661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.962008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190de038 00:23:06.077 [2024-04-25 17:26:35.963230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.963279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.972601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eaab8 00:23:06.077 [2024-04-25 17:26:35.973854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.973900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.983395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6300 00:23:06.077 [2024-04-25 17:26:35.984171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.077 [2024-04-25 17:26:35.984218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.077 [2024-04-25 17:26:35.995973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e9168 00:23:06.077 [2024-04-25 17:26:35.997128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.078 [2024-04-25 17:26:35.997179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:06.078 [2024-04-25 17:26:36.007114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e3060 00:23:06.078 [2024-04-25 17:26:36.008069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.078 [2024-04-25 17:26:36.008116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.078 [2024-04-25 17:26:36.019985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e23b8 00:23:06.078 [2024-04-25 17:26:36.021795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.078 [2024-04-25 17:26:36.021851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.078 [2024-04-25 17:26:36.027899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e88f8 00:23:06.078 [2024-04-25 17:26:36.028725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.078 [2024-04-25 
17:26:36.028796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:06.078 [2024-04-25 17:26:36.040862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ebfd0 00:23:06.078 [2024-04-25 17:26:36.042311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.078 [2024-04-25 17:26:36.042358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:06.078 [2024-04-25 17:26:36.052154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f7da8 00:23:06.078 [2024-04-25 17:26:36.053875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.078 [2024-04-25 17:26:36.053923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:06.336 [2024-04-25 17:26:36.063806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ed920 00:23:06.336 [2024-04-25 17:26:36.065410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.336 [2024-04-25 17:26:36.065456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.336 [2024-04-25 17:26:36.074074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f5be8 00:23:06.336 [2024-04-25 17:26:36.075499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.336 [2024-04-25 17:26:36.075546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:06.336 [2024-04-25 17:26:36.083873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190df988 00:23:06.336 [2024-04-25 17:26:36.084936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.336 [2024-04-25 17:26:36.084984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.336 [2024-04-25 17:26:36.094143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f96f8 00:23:06.336 [2024-04-25 17:26:36.095109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.336 [2024-04-25 17:26:36.095156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:06.336 [2024-04-25 17:26:36.106178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0350 00:23:06.336 [2024-04-25 17:26:36.107709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:06.337 [2024-04-25 17:26:36.107762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.116482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fb8b8 00:23:06.337 [2024-04-25 17:26:36.118042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.118089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.126178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ed920 00:23:06.337 [2024-04-25 17:26:36.127508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.127554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.135892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fc128 00:23:06.337 [2024-04-25 17:26:36.137295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.137343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.145670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f1868 00:23:06.337 [2024-04-25 17:26:36.146762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.146815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.157313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e99d8 00:23:06.337 [2024-04-25 17:26:36.158904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.158950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.164575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fe720 00:23:06.337 [2024-04-25 17:26:36.165383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.165428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.176674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ed920 00:23:06.337 [2024-04-25 17:26:36.177926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11427 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.177974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.186779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e99d8 00:23:06.337 [2024-04-25 17:26:36.187715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.187770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.196341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e1b48 00:23:06.337 [2024-04-25 17:26:36.197209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.197254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.208254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e4140 00:23:06.337 [2024-04-25 17:26:36.210057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.210104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.215765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190fd640 00:23:06.337 [2024-04-25 17:26:36.216743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.216795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.227733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190de038 00:23:06.337 [2024-04-25 17:26:36.229117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.229163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.237318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f35f0 00:23:06.337 [2024-04-25 17:26:36.238535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.238581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.246665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f5be8 00:23:06.337 [2024-04-25 17:26:36.247715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:6590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.247770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.256424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f46d0 00:23:06.337 [2024-04-25 17:26:36.257574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.257618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.266529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190df118 00:23:06.337 [2024-04-25 17:26:36.267185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.267262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.276736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f0350 00:23:06.337 [2024-04-25 17:26:36.277725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.277779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.286157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f81e0 00:23:06.337 [2024-04-25 17:26:36.287012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.287057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.297983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f8a50 00:23:06.337 [2024-04-25 17:26:36.298997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.299026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.337 [2024-04-25 17:26:36.307766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f6890 00:23:06.337 [2024-04-25 17:26:36.308702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.337 [2024-04-25 17:26:36.308757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.319425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e95a0 00:23:06.596 [2024-04-25 17:26:36.320469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:17596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.596 [2024-04-25 17:26:36.320519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.329181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e23b8 00:23:06.596 [2024-04-25 17:26:36.330067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.596 [2024-04-25 17:26:36.330114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.338838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f2948 00:23:06.596 [2024-04-25 17:26:36.339518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.596 [2024-04-25 17:26:36.339564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.351256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ee5c8 00:23:06.596 [2024-04-25 17:26:36.353119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.596 [2024-04-25 17:26:36.353166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.358547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e8d30 00:23:06.596 [2024-04-25 17:26:36.359535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.596 [2024-04-25 17:26:36.359611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.370439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190e6300 00:23:06.596 [2024-04-25 17:26:36.372034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.596 [2024-04-25 17:26:36.372083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.381479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190eea00 00:23:06.596 [2024-04-25 17:26:36.382885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.596 [2024-04-25 17:26:36.382933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.596 [2024-04-25 17:26:36.391638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190f5378 00:23:06.596 [2024-04-25 17:26:36.392855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:06.596 [2024-04-25 17:26:36.392902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:06.596 [2024-04-25 17:26:36.402957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ee5c8
00:23:06.596 [2024-04-25 17:26:36.404728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:06.596 [2024-04-25 17:26:36.404780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:23:06.596 [2024-04-25 17:26:36.410232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82690) with pdu=0x2000190ee5c8
00:23:06.596 [2024-04-25 17:26:36.411141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:06.596 [2024-04-25 17:26:36.411186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:23:06.596
00:23:06.596 Latency(us)
00:23:06.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.596 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:06.596 nvme0n1 : 2.01 24227.74 94.64 0.00 0.00 5276.52 2055.45 17277.67
00:23:06.596 ===================================================================================================================
00:23:06.596 Total : 24227.74 94.64 0.00 0.00 5276.52 2055.45 17277.67
00:23:06.596 0
00:23:06.596 17:26:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:06.596 17:26:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:06.596 17:26:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:06.596 | .driver_specific
00:23:06.596 | .nvme_error
00:23:06.596 | .status_code
00:23:06.596 | .command_transient_transport_error'
00:23:06.596 17:26:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:06.855 17:26:36 -- host/digest.sh@71 -- # (( 190 > 0 ))
00:23:06.855 17:26:36 -- host/digest.sh@73 -- # killprocess 92869
00:23:06.855 17:26:36 -- common/autotest_common.sh@936 -- # '[' -z 92869 ']'
00:23:06.855 17:26:36 -- common/autotest_common.sh@940 -- # kill -0 92869
00:23:06.855 17:26:36 -- common/autotest_common.sh@941 -- # uname
00:23:06.855 17:26:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:06.855 17:26:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92869
00:23:06.855 17:26:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:06.855 killing process with pid 92869 17:26:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:06.855 17:26:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92869' Received shutdown signal, test time was about 2.000000 seconds
00:23:06.855
00:23:06.855 Latency(us)
00:23:06.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.855 ===================================================================================================================
00:23:06.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
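The trace just above is the pass/fail check for this run: get_transient_errcount asks the bdevperf RPC server for the bdev's I/O statistics and extracts the transient-transport-error counter with the jq filter shown, and the (( 190 > 0 )) test confirms that the injected digest corruption really was surfaced as failed commands before the first bdevperf process (pid 92869) is torn down. A standalone sketch of the same check, illustrative only, assuming the same RPC socket and the JSON layout implied by that jq filter:

    # mirrors the get_transient_errcount step traced from digest.sh above
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "data digest errors were reported as transient transport errors: $errcount"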
00:23:06.855 17:26:36 -- common/autotest_common.sh@955 -- # kill 92869
00:23:06.855 17:26:36 -- common/autotest_common.sh@960 -- # wait 92869
00:23:07.114 17:26:36 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:23:07.114 17:26:36 -- host/digest.sh@54 -- # local rw bs qd
00:23:07.114 17:26:36 -- host/digest.sh@56 -- # rw=randwrite
00:23:07.114 17:26:36 -- host/digest.sh@56 -- # bs=131072
00:23:07.114 17:26:36 -- host/digest.sh@56 -- # qd=16
00:23:07.114 17:26:36 -- host/digest.sh@58 -- # bperfpid=92960
00:23:07.114 17:26:36 -- host/digest.sh@60 -- # waitforlisten 92960 /var/tmp/bperf.sock
00:23:07.114 17:26:36 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:23:07.114 17:26:36 -- common/autotest_common.sh@817 -- # '[' -z 92960 ']'
00:23:07.114 17:26:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:07.114 17:26:36 -- common/autotest_common.sh@822 -- # local max_retries=100
00:23:07.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:07.114 17:26:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:07.114 17:26:36 -- common/autotest_common.sh@826 -- # xtrace_disable
00:23:07.114 17:26:36 -- common/autotest_common.sh@10 -- # set +x
00:23:07.114 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:07.114 Zero copy mechanism will not be used.
00:23:07.114 [2024-04-25 17:26:36.939542] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization...
00:23:07.114 [2024-04-25 17:26:36.939655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92960 ]
00:23:07.114 [2024-04-25 17:26:37.070066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:07.373 [2024-04-25 17:26:37.125765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:07.939 17:26:37 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:23:07.939 17:26:37 -- common/autotest_common.sh@850 -- # return 0
00:23:07.939 17:26:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:07.939 17:26:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:08.196 17:26:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:08.196 17:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:08.196 17:26:38 -- common/autotest_common.sh@10 -- # set +x
00:23:08.196 17:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:08.196 17:26:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:08.196 17:26:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:08.454 nvme0n1
00:23:08.454 17:26:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:08.454 17:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:08.454 17:26:38 --
common/autotest_common.sh@10 -- # set +x 00:23:08.454 17:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.454 17:26:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:08.454 17:26:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:08.713 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:08.713 Zero copy mechanism will not be used. 00:23:08.713 Running I/O for 2 seconds... 00:23:08.713 [2024-04-25 17:26:38.483730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.713 [2024-04-25 17:26:38.484095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.713 [2024-04-25 17:26:38.484140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.713 [2024-04-25 17:26:38.488906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.713 [2024-04-25 17:26:38.489205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.713 [2024-04-25 17:26:38.489245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.713 [2024-04-25 17:26:38.493820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.494125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.494176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.498817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.499122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.499170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.503766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.504099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.504133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.508714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.509063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.509108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.513475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.513830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.513861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.518417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.518771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.518833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.523350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.523678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.523719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.528200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.528553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.528588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.533250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.533595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.533628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.538215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.538536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.538567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.543066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.543414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.543472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.548189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.548553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.548587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.553180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.553503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.553538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.558150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.558471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.558506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.562973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.563314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.563349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.567959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.568272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.568339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.572946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.573284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.573318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.577690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.578051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.578097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.582454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.582799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.582825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.587455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.587807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.587851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.592380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.592747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.592790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.597308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.597653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.597686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.602279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.602609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.602643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.607253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.607602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.607637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.612228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.612578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 
[2024-04-25 17:26:38.612612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.714 [2024-04-25 17:26:38.617212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.714 [2024-04-25 17:26:38.617544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.714 [2024-04-25 17:26:38.617585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.622152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.622466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.622497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.626869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.627193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.627227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.631639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.631975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.632007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.636401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.636806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.636849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.641334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.641661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.641698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.646097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.646427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.646458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.650814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.651139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.651177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.655455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.655778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.655816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.660257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.660610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.660643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.664999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.665338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.665376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.669574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.669912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.669943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.674289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.674613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.674647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.679121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.679429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.679486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.683780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.684088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.684135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.715 [2024-04-25 17:26:38.689150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.715 [2024-04-25 17:26:38.689452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.715 [2024-04-25 17:26:38.689499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.694277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.694652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.694716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.699352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.699687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.699731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.704244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.704642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.704694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.709149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.709465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.709496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.713889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.714226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.714257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.718794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.719144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.719188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.723550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.723891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.723926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.728428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.728782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.728829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.733081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.975 [2024-04-25 17:26:38.733389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.975 [2024-04-25 17:26:38.733440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.975 [2024-04-25 17:26:38.737836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.738134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.738177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.742469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.742794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.742832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.747197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 
[2024-04-25 17:26:38.747522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.747556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.751927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.752243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.752274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.756664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.757018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.757050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.761411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.761725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.761760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.766160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.766475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.766506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.770890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.771228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.771262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.775613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.775943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.775976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.780455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) 
with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.780855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.780891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.785309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.785608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.785654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.790123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.790448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.790487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.794861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.795193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.795226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.799664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.800006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.800045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.804495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.804892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.804927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.809190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.809496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.809543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.813856] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.814180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.814213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.818710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.819057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.819112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.823486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.823811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.823834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.828174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.828512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.828545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.832799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.833144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.833176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.837494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.837818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.837853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.842441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.842812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.842863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.847800] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.848185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.848219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.853265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.853650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.853690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.859153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.859457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.859501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.976 [2024-04-25 17:26:38.864558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.976 [2024-04-25 17:26:38.864949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.976 [2024-04-25 17:26:38.864989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.870105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.870396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.870475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.875333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.875640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.875687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.880550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.880954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.880993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:08.977 [2024-04-25 17:26:38.885860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.886236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.886273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.891055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.891363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.891413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.895713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.896060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.896098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.900396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.900723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.900767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.904955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.905299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.905330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.909572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.909893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.909939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.914363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.914691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.914737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.919164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.919491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.919522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.923899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.924214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.924249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.928548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.928903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.928938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.933345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.933673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.933718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.938175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.938489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.938519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.943027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.943356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.943387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.977 [2024-04-25 17:26:38.947796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:08.977 [2024-04-25 17:26:38.948149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.977 [2024-04-25 17:26:38.948209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.953020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.953385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.953430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.958000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.958370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.958408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.962755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.963071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.963102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.967420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.967744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.967784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.972182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.972529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.972563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.976847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.977192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.977223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.981592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.981911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.981958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.986407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.986719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.986756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.991331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.991658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.991700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.237 [2024-04-25 17:26:38.996086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.237 [2024-04-25 17:26:38.996415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.237 [2024-04-25 17:26:38.996446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.000922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.001255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.001287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.005707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.006035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.006069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.010436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.010751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.010793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.015180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.015492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 
[2024-04-25 17:26:39.015528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.020013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.020336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.020366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.024722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.025041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.025075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.029426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.029750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.029772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.034055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.034380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.034428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.038793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.039098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.039145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.043542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.043879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.043918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.048232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.048549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.048580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.052998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.053329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.053362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.057641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.057989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.058032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.062307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.062614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.062662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.067056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.067370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.067417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.071690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.072023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.072063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.076212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.076539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.076573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.081073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.081401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.081431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.085804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.086128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.086162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.090598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.090922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.090954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.095394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.095707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.095747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.100094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.100422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.100453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.104828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.105129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.105161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.109517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.238 [2024-04-25 17:26:39.109826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.238 [2024-04-25 17:26:39.109872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.238 [2024-04-25 17:26:39.114305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.114618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.114648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.119043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.119377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.119414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.123650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.123983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.124016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.128314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.128617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.128653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.133121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.133455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.133497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.138003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.138333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.138365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.142620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.142955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.142986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.147378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 
[2024-04-25 17:26:39.147693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.147733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.152203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.152535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.152566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.156928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.157260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.157293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.161653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.161997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.162031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.166389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.166705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.166744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.171207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.171520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.171550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.175850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.176187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.176219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.180547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.180887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.180918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.185350] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.185663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.185693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.190128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.190425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.190472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.194893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.195216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.195248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.199637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.200000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.200042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.204249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.204588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.204620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.239 [2024-04-25 17:26:39.208903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.239 [2024-04-25 17:26:39.209244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.239 [2024-04-25 17:26:39.209278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.214207] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.214567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.214607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.219410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.219800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.219855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.224360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.224781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.224828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.229249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.229580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.229612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.234178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.234499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.234535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.239224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.239558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.239591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.244097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.244443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.244471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:09.499 [2024-04-25 17:26:39.248778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.249081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.249129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.253466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.499 [2024-04-25 17:26:39.253791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.499 [2024-04-25 17:26:39.253828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.499 [2024-04-25 17:26:39.258250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.258581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.258616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.263068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.263413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.263456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.267813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.268138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.268172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.272435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.272799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.272850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.277145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.277449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.277497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.281895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.282199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.282247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.286479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.286816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.286852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.291254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.291559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.291608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.295832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.296139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.296195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.300688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.301014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.301049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.305481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.305807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.305830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.310284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.310596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.310627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.315007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.315348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.315382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.319683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.320033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.320070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.324438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.324803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.324849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.329111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.329448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.329480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.333965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.334262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.334320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.338729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.339046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.339079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.343442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.343747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.343805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.348200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.348535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.348566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.353061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.353373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.353404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.357763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.358089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.358120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.362516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.362845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.362884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.367325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.367638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.367668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.372085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.372414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 [2024-04-25 17:26:39.372456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.376808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.377121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.500 
[2024-04-25 17:26:39.377151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.500 [2024-04-25 17:26:39.381589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.500 [2024-04-25 17:26:39.381926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.381958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.386336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.386673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.386714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.391085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.391406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.391456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.395803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.396108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.396155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.400472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.400857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.400888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.405266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.405591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.405624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.409923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.410231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.410280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.414694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.415035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.415067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.419332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.419675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.419728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.424148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.424488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.424522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.428985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.429301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.429332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.433776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.434088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.434120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.438434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.438738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.438777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.443231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.443535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.443583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.447817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.448141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.448182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.452494] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.452883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.452917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.457261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.457557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.457604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.462039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.462331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.462379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.466659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.466983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.467016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.501 [2024-04-25 17:26:39.471498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.501 [2024-04-25 17:26:39.471869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.501 [2024-04-25 17:26:39.471901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.476913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.477283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.477326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.481948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.482301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.482345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.486627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.486971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.486995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.491471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.491808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.491851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.496538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.496903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.496954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.501407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.501722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.501744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.506181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.506488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.506536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.512498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 
[2024-04-25 17:26:39.512849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.512874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.518611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.519007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.519045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.525154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.525451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.761 [2024-04-25 17:26:39.525484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.761 [2024-04-25 17:26:39.530914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.761 [2024-04-25 17:26:39.531211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.531244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.536688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.537064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.537096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.542512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.542835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.542860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.547418] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.547733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.547754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.552107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.552443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.552477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.556835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.557140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.557192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.561490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.561824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.561847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.566172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.566494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.566528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.570867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.571159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.571207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.575538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.575847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.575899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.580333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.580674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.580714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.585045] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.585400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.589603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.589954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.589996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.594338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.594660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.594694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.599062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.599391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.599426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.603763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.604084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.604124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.608441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.608780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.608821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.613586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.613961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.613996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:09.762 [2024-04-25 17:26:39.618918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.619278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.619321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.624051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.624409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.624444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.629538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.629902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.629936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.634822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.635218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.635266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.640245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.640582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.640651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.762 [2024-04-25 17:26:39.645625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.762 [2024-04-25 17:26:39.645994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.762 [2024-04-25 17:26:39.646045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.650635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.650998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.651050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.655556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.655903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.655938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.660699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.661083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.661134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.665694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.666081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.666119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.670692] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.671034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.671066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.675512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.675868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.675900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.680318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.680668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.680714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.685018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.685363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.685397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.690047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.690408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.690454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.694903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.695215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.695247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.699750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.700081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.700115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.704416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.704753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.704810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.709386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.709730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.709775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.714376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.714709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.714761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.719334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.719677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.719748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.724159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.724535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.724569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.729314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.729650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.729683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:09.763 [2024-04-25 17:26:39.734276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:09.763 [2024-04-25 17:26:39.734644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.763 [2024-04-25 17:26:39.734683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.739886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.740221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.740253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.745036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.745384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.745417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.749957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.750288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.750319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.754786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.755117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 
[2024-04-25 17:26:39.755151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.759647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.759990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.760023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.764676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.765033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.765071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.769629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.769967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.770000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.774353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.774682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.774727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.779311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.779657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.779715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.784347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.784644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.784693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.789484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.789822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.789863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.794401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.794744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.794785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.799299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.799638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.799699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.804386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.804726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.804781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.809298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.809641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.809684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.814132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.814461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.814495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.819061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.819424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.819471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.824123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.824472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.824505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.829115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.829446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.829479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.833872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.023 [2024-04-25 17:26:39.834202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.023 [2024-04-25 17:26:39.834235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.023 [2024-04-25 17:26:39.838721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.839084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.839131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.843755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.844085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.844117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.848632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.849005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.849042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.853365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.853694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.853741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.858301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.858647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.858689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.863730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.864087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.864127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.869214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.869544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.869576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.874702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.875090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.875140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.880440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.880881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.880917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.885754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.886121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.886172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.891035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.891368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.891417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.896335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 
[2024-04-25 17:26:39.896672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.896756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.901600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.901987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.902024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.906684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.907026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.907083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.911493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.911829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.911862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.916146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.916502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.916536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.921147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.921470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.921504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.925958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.926282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.926317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.930586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.930902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.930945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.935289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.935626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.935658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.940091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.940460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.940492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.944918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.945230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.945261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.949721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.950045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.950075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.954379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.954701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.954724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.959100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.959421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.959470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.963862] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.964181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.964213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.968492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.024 [2024-04-25 17:26:39.968854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.024 [2024-04-25 17:26:39.968890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.024 [2024-04-25 17:26:39.973094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.025 [2024-04-25 17:26:39.973415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-04-25 17:26:39.973449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.025 [2024-04-25 17:26:39.977687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.025 [2024-04-25 17:26:39.978022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-04-25 17:26:39.978053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.025 [2024-04-25 17:26:39.982467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.025 [2024-04-25 17:26:39.982792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-04-25 17:26:39.982830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.025 [2024-04-25 17:26:39.987279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.025 [2024-04-25 17:26:39.987593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-04-25 17:26:39.987630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.025 [2024-04-25 17:26:39.992060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.025 [2024-04-25 17:26:39.992419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-04-25 17:26:39.992452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
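Each WRITE in the run above fails the TCP data digest check (data_crc32_calc_done in tcp.c) and is completed with TRANSIENT TRANSPORT ERROR, printed as (SCT/SC) = (00/22) with dnr:0, i.e. the Do Not Retry bit is clear and the host may retry the command. In NVMe/TCP the data digest is a CRC32C over the data PDU payload. The sketch below is illustrative only, not SPDK source: it uses a plain bitwise CRC32C rather than SPDK's accelerated helpers, and the payload contents and the corrupted wire digest are made-up values.

```c
/* Illustrative sketch only -- not SPDK code.  Computes the CRC32C data
 * digest that the transport verifies for each data PDU and shows how a
 * mismatch would be detected. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * initial value and final XOR of 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int b = 0; b < 8; b++) {
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	uint8_t payload[16] = { 0 };            /* hypothetical PDU data */
	uint32_t expected = crc32c(payload, sizeof(payload));
	uint32_t received = expected ^ 0x1;     /* simulate a corrupted wire digest */

	if (received != expected) {
		/* This is the condition reported above as "Data digest error";
		 * the request is then completed with the transport-level status
		 * (00/22) TRANSIENT TRANSPORT ERROR, dnr:0, so it is retryable. */
		printf("data digest mismatch: got 0x%08x, want 0x%08x\n",
		       received, expected);
	}
	return 0;
}
```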
00:23:10.025 [2024-04-25 17:26:39.997033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.025 [2024-04-25 17:26:39.997402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.025 [2024-04-25 17:26:39.997440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.284 [2024-04-25 17:26:40.002625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.284 [2024-04-25 17:26:40.002970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.284 [2024-04-25 17:26:40.003005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.284 [2024-04-25 17:26:40.008117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.284 [2024-04-25 17:26:40.008439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.284 [2024-04-25 17:26:40.008478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.284 [2024-04-25 17:26:40.013141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.284 [2024-04-25 17:26:40.013478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.284 [2024-04-25 17:26:40.013512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.284 [2024-04-25 17:26:40.018093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.284 [2024-04-25 17:26:40.018458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.284 [2024-04-25 17:26:40.018498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.284 [2024-04-25 17:26:40.024125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.284 [2024-04-25 17:26:40.024464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.024498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.029394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.029717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.029757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.034256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.034583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.034618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.039155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.039478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.039512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.043890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.044226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.044267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.048535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.048918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.048955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.053352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.053677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.053721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.057997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.058319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.058352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.062669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.063004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.063043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.067305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.067643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.067676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.072078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.072447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.072478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.076771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.077108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.077151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.081547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.081895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.081927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.086248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.086585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.086619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.091025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.091334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.091366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.095537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.095873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.095904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.100343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.100716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.100762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.105090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.105412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.105446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.109774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.110097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.110130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.114471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.114797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.114821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.119287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.119599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.119631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.124063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.124420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.124453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.128730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.129067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 
[2024-04-25 17:26:40.129099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.133457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.133797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.133830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.138254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.138566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.138593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.143090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.143406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.143439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.147760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.148066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.148100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.285 [2024-04-25 17:26:40.152529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.285 [2024-04-25 17:26:40.152898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.285 [2024-04-25 17:26:40.152944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.157385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.157712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.157752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.162154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.162477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.162510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.166755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.167078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.167110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.171490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.171840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.171872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.176267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.176632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.176666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.181207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.181519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.181549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.185933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.186256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.186290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.190641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.190995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.191031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.195194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.195516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.195550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.200035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.200374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.200408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.204647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.205015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.205050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.209339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.209662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.209696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.214203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.214515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.214542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.218884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.219213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.219245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.223569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.223908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.223938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.228300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.228615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.228647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.233064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.233384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.233417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.237849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.238208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.238246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.242539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.242858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.242909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.247242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.247555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.247586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.251882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.252227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.252260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.286 [2024-04-25 17:26:40.256497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.286 [2024-04-25 17:26:40.256873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.286 [2024-04-25 17:26:40.256905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.261888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 
[2024-04-25 17:26:40.262254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.262291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.266911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.267297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.267332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.271733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.272049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.272083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.276526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.276888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.276919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.281480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.281806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.281843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.286222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.286546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.286580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.291066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.291358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.291404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.295661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.295985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.296018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.300458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.300841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.300891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.305140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.305463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.305495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.309819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.310127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.310161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.314442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.314787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.314830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.319389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.319749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.319786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.324222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.324592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.324637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.329104] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.329444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.329478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.333697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.334029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.334064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.338333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.338671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.338719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.343068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.343388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.343425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.347642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.347985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.348029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.352407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.352733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.352781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.573 [2024-04-25 17:26:40.356997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.573 [2024-04-25 17:26:40.357321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.573 [2024-04-25 17:26:40.357353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:10.573 [2024-04-25 17:26:40.361666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.361996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.362028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.366270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.366606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.366645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.371010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.371332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.371373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.375567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.375904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.375944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.380436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.380819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.380867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.385162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.385483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.385518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.389822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.390132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.390163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.394534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.394860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.394894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.399335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.399654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.399689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.403946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.404311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.404349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.408562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.408934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.408978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.413338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.413661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.413694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.418078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.418401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.418435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.422850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.423173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.423206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.427408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.427730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.427766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.432100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.432458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.432490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.436755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.437090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.437124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.441503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.441840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.441871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.446532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.446849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.446895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.451224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.451529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.451574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.456041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.456402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.456444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.460914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.461214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.461244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.465582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.465900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.465946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.470388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.470702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.470751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.475089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.475396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.475443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.574 [2024-04-25 17:26:40.479514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f82830) with pdu=0x2000190fef90 00:23:10.574 [2024-04-25 17:26:40.479640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.574 [2024-04-25 17:26:40.479660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.574 00:23:10.574 Latency(us) 00:23:10.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.574 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:10.574 nvme0n1 : 2.00 6380.14 797.52 0.00 0.00 2502.14 1407.53 6404.65 00:23:10.574 =================================================================================================================== 00:23:10.574 Total : 6380.14 797.52 0.00 0.00 2502.14 1407.53 6404.65 00:23:10.574 0 00:23:10.575 17:26:40 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:10.575 17:26:40 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:10.575 | .driver_specific 00:23:10.575 | .nvme_error 00:23:10.575 | .status_code 00:23:10.575 | .command_transient_transport_error' 00:23:10.575 17:26:40 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat 
-b nvme0n1 00:23:10.575 17:26:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:10.833 17:26:40 -- host/digest.sh@71 -- # (( 412 > 0 )) 00:23:10.833 17:26:40 -- host/digest.sh@73 -- # killprocess 92960 00:23:10.833 17:26:40 -- common/autotest_common.sh@936 -- # '[' -z 92960 ']' 00:23:10.833 17:26:40 -- common/autotest_common.sh@940 -- # kill -0 92960 00:23:10.833 17:26:40 -- common/autotest_common.sh@941 -- # uname 00:23:10.833 17:26:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:10.833 17:26:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92960 00:23:10.833 17:26:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:10.833 17:26:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:10.833 killing process with pid 92960 00:23:10.833 17:26:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92960' 00:23:10.833 Received shutdown signal, test time was about 2.000000 seconds 00:23:10.833 00:23:10.833 Latency(us) 00:23:10.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.833 =================================================================================================================== 00:23:10.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.833 17:26:40 -- common/autotest_common.sh@955 -- # kill 92960 00:23:10.833 17:26:40 -- common/autotest_common.sh@960 -- # wait 92960 00:23:11.091 17:26:40 -- host/digest.sh@116 -- # killprocess 92669 00:23:11.091 17:26:40 -- common/autotest_common.sh@936 -- # '[' -z 92669 ']' 00:23:11.091 17:26:40 -- common/autotest_common.sh@940 -- # kill -0 92669 00:23:11.091 17:26:40 -- common/autotest_common.sh@941 -- # uname 00:23:11.091 17:26:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:11.091 17:26:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92669 00:23:11.091 17:26:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:11.091 killing process with pid 92669 00:23:11.091 17:26:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:11.091 17:26:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92669' 00:23:11.091 17:26:40 -- common/autotest_common.sh@955 -- # kill 92669 00:23:11.091 17:26:40 -- common/autotest_common.sh@960 -- # wait 92669 00:23:11.350 00:23:11.350 real 0m17.084s 00:23:11.350 user 0m32.458s 00:23:11.350 sys 0m4.238s 00:23:11.350 17:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:11.350 17:26:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.350 ************************************ 00:23:11.350 END TEST nvmf_digest_error 00:23:11.350 ************************************ 00:23:11.350 17:26:41 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:11.350 17:26:41 -- host/digest.sh@150 -- # nvmftestfini 00:23:11.350 17:26:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:11.350 17:26:41 -- nvmf/common.sh@117 -- # sync 00:23:11.350 17:26:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.350 17:26:41 -- nvmf/common.sh@120 -- # set +e 00:23:11.350 17:26:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.350 17:26:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.350 rmmod nvme_tcp 00:23:11.350 rmmod nvme_fabrics 00:23:11.350 rmmod nvme_keyring 00:23:11.350 17:26:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.350 17:26:41 -- nvmf/common.sh@124 -- # set -e 00:23:11.350 17:26:41 
-- nvmf/common.sh@125 -- # return 0 00:23:11.350 17:26:41 -- nvmf/common.sh@478 -- # '[' -n 92669 ']' 00:23:11.350 17:26:41 -- nvmf/common.sh@479 -- # killprocess 92669 00:23:11.350 17:26:41 -- common/autotest_common.sh@936 -- # '[' -z 92669 ']' 00:23:11.350 17:26:41 -- common/autotest_common.sh@940 -- # kill -0 92669 00:23:11.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (92669) - No such process 00:23:11.350 Process with pid 92669 is not found 00:23:11.350 17:26:41 -- common/autotest_common.sh@963 -- # echo 'Process with pid 92669 is not found' 00:23:11.350 17:26:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:11.350 17:26:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:11.350 17:26:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:11.350 17:26:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.350 17:26:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.350 17:26:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.350 17:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.350 17:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.350 17:26:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:11.350 00:23:11.350 real 0m34.169s 00:23:11.350 user 1m3.073s 00:23:11.350 sys 0m8.786s 00:23:11.350 17:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:11.350 17:26:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.350 ************************************ 00:23:11.350 END TEST nvmf_digest 00:23:11.350 ************************************ 00:23:11.350 17:26:41 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:23:11.350 17:26:41 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:23:11.350 17:26:41 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:11.350 17:26:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:11.350 17:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:11.350 17:26:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.609 ************************************ 00:23:11.609 START TEST nvmf_mdns_discovery 00:23:11.609 ************************************ 00:23:11.609 17:26:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:11.609 * Looking for test storage... 
00:23:11.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.609 17:26:41 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.609 17:26:41 -- nvmf/common.sh@7 -- # uname -s 00:23:11.609 17:26:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.609 17:26:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.609 17:26:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.609 17:26:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.609 17:26:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.609 17:26:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.609 17:26:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.609 17:26:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.609 17:26:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.609 17:26:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.609 17:26:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:23:11.609 17:26:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:23:11.609 17:26:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.609 17:26:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.609 17:26:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.609 17:26:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.609 17:26:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.609 17:26:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.609 17:26:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.609 17:26:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.609 17:26:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.609 17:26:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.609 17:26:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.609 17:26:41 -- paths/export.sh@5 -- # export PATH 00:23:11.609 17:26:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.609 17:26:41 -- nvmf/common.sh@47 -- # : 0 00:23:11.609 17:26:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.609 17:26:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.609 17:26:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.609 17:26:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.609 17:26:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.609 17:26:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.609 17:26:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.609 17:26:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.609 17:26:41 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:11.609 17:26:41 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:11.609 17:26:41 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:11.610 17:26:41 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:11.610 17:26:41 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:11.610 17:26:41 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:11.610 17:26:41 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:11.610 17:26:41 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:11.610 17:26:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:11.610 17:26:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.610 17:26:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:11.610 17:26:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:11.610 17:26:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:11.610 17:26:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.610 17:26:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.610 17:26:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.610 17:26:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:11.610 17:26:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:11.610 17:26:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:11.610 17:26:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:11.610 17:26:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:11.610 17:26:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:11.610 17:26:41 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:11.610 17:26:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.610 17:26:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:11.610 17:26:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:11.610 17:26:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.610 17:26:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.610 17:26:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.610 17:26:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.610 17:26:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.610 17:26:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.610 17:26:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.610 17:26:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.610 17:26:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:11.610 17:26:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:11.610 Cannot find device "nvmf_tgt_br" 00:23:11.610 17:26:41 -- nvmf/common.sh@155 -- # true 00:23:11.610 17:26:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.610 Cannot find device "nvmf_tgt_br2" 00:23:11.610 17:26:41 -- nvmf/common.sh@156 -- # true 00:23:11.610 17:26:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:11.610 17:26:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:11.610 Cannot find device "nvmf_tgt_br" 00:23:11.610 17:26:41 -- nvmf/common.sh@158 -- # true 00:23:11.610 17:26:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:11.610 Cannot find device "nvmf_tgt_br2" 00:23:11.610 17:26:41 -- nvmf/common.sh@159 -- # true 00:23:11.610 17:26:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:11.868 17:26:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:11.868 17:26:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.868 17:26:41 -- nvmf/common.sh@162 -- # true 00:23:11.868 17:26:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.868 17:26:41 -- nvmf/common.sh@163 -- # true 00:23:11.868 17:26:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.868 17:26:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.868 17:26:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.868 17:26:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.868 17:26:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.868 17:26:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.868 17:26:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.868 17:26:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:11.868 17:26:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:11.868 17:26:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:11.868 17:26:41 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:23:11.868 17:26:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:11.868 17:26:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:11.868 17:26:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.868 17:26:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.868 17:26:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.868 17:26:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:11.868 17:26:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:11.868 17:26:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.868 17:26:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.868 17:26:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.868 17:26:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.868 17:26:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.868 17:26:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:11.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:23:11.869 00:23:11.869 --- 10.0.0.2 ping statistics --- 00:23:11.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.869 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:11.869 17:26:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:11.869 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.869 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:23:11.869 00:23:11.869 --- 10.0.0.3 ping statistics --- 00:23:11.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.869 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:11.869 17:26:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:11.869 00:23:11.869 --- 10.0.0.1 ping statistics --- 00:23:11.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.869 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:11.869 17:26:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.869 17:26:41 -- nvmf/common.sh@422 -- # return 0 00:23:11.869 17:26:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:11.869 17:26:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.869 17:26:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:11.869 17:26:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:11.869 17:26:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.869 17:26:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:11.869 17:26:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:12.127 17:26:41 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:12.127 17:26:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:12.127 17:26:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:12.127 17:26:41 -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 17:26:41 -- nvmf/common.sh@470 -- # nvmfpid=93252 00:23:12.127 17:26:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:12.127 17:26:41 -- nvmf/common.sh@471 -- # waitforlisten 93252 00:23:12.127 17:26:41 -- common/autotest_common.sh@817 -- # '[' -z 93252 ']' 00:23:12.127 17:26:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.127 17:26:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:12.127 17:26:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.127 17:26:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:12.127 17:26:41 -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 [2024-04-25 17:26:41.922996] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:23:12.127 [2024-04-25 17:26:41.923555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.127 [2024-04-25 17:26:42.064366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.386 [2024-04-25 17:26:42.134075] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.386 [2024-04-25 17:26:42.134127] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.386 [2024-04-25 17:26:42.134141] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.386 [2024-04-25 17:26:42.134152] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.386 [2024-04-25 17:26:42.134160] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
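For reference, the nvmf_veth_init/nvmfappstart trace above boils down to a small namespace-plus-veth topology. The sketch below reproduces it by hand using only names and addresses that appear in the trace (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2, nvmf_br, 10.0.0.1-10.0.0.3); it is an illustrative outline of the helper in nvmf/common.sh, not the full script, and it omits the cleanup of any pre-existing interfaces.

# Sketch: test network as built by nvmf_veth_init in the trace above.
ip netns add nvmf_tgt_ns_spdk                                   # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge the host-side veth ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                              # connectivity checks, as in the trace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
# Start the target inside the namespace, as the nvmfappstart trace above shows:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &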
00:23:12.386 [2024-04-25 17:26:42.134193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.954 17:26:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:12.954 17:26:42 -- common/autotest_common.sh@850 -- # return 0 00:23:12.954 17:26:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:12.954 17:26:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:12.954 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 17:26:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.214 17:26:42 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:13.214 17:26:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 17:26:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:42 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:13.214 17:26:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:13.214 17:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 [2024-04-25 17:26:43.016667] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:13.214 17:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 [2024-04-25 17:26:43.024759] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:13.214 17:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 null0 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:13.214 17:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 null1 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:13.214 17:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 null2 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:13.214 17:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 null3 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
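The rpc_cmd calls in this stretch go to the target's default RPC socket and amount to the plain rpc.py invocations sketched below. The commands and arguments are copied from the trace; the RPC variable is just shorthand for this sketch, and the single-letter transport options are reproduced verbatim from NVMF_TRANSPORT_OPTS rather than spelled out.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # same rpc.py used throughout the test
$RPC nvmf_set_config --discovery-filter=address        # DISCOVERY_FILTER=address from mdns_discovery.sh
$RPC framework_start_init                               # finish init; the target was started with --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as in the trace
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009   # discovery service on port 8009
for b in null0 null1 null2 null3; do
  $RPC bdev_null_create "$b" 1000 512                   # null bdevs that will back the test subsystems
done
$RPC bdev_wait_for_examine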
00:23:13.214 17:26:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 17:26:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@47 -- # hostpid=93308 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:13.214 17:26:43 -- host/mdns_discovery.sh@48 -- # waitforlisten 93308 /tmp/host.sock 00:23:13.214 17:26:43 -- common/autotest_common.sh@817 -- # '[' -z 93308 ']' 00:23:13.214 17:26:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:13.214 17:26:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:13.214 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:13.214 17:26:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:13.214 17:26:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:13.214 17:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.214 [2024-04-25 17:26:43.118333] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:23:13.214 [2024-04-25 17:26:43.118427] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93308 ] 00:23:13.473 [2024-04-25 17:26:43.253182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.473 [2024-04-25 17:26:43.320232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.473 17:26:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:13.473 17:26:43 -- common/autotest_common.sh@850 -- # return 0 00:23:13.473 17:26:43 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:13.473 17:26:43 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:13.473 17:26:43 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:13.731 17:26:43 -- host/mdns_discovery.sh@57 -- # avahipid=93322 00:23:13.731 17:26:43 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:13.731 17:26:43 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:13.731 17:26:43 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:13.731 Process 1004 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:13.731 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:13.731 Successfully dropped root privileges. 00:23:13.731 avahi-daemon 0.8 starting up. 00:23:13.731 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:13.731 Successfully called chroot(). 00:23:13.731 Successfully dropped remaining capabilities. 00:23:14.664 No service file found in /etc/avahi/services. 00:23:14.664 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:14.664 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:14.664 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:14.664 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:14.664 Network interface enumeration completed. 
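The avahi responder is confined to the two target-side interfaces so that only the test network is advertised over mDNS. A minimal sketch of the same launch is below; the /dev/fd/63 seen in the trace is consistent with feeding the [server] config through a process substitution, which is how it is written here.

# Stop any system-wide responder, then run one inside the target namespace,
# limited to the test interfaces (config text copied from the trace above):
avahi-daemon --kill
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
sleep 1   # give it a moment to join the multicast groups, as the test does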
00:23:14.664 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:23:14.664 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:14.664 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:23:14.664 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:14.664 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 2913736376. 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:14.664 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.664 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.664 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:14.664 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.664 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.664 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:14.664 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.664 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@68 -- # sort 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@68 -- # xargs 00:23:14.664 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.664 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.664 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@64 -- # xargs 00:23:14.664 17:26:44 -- host/mdns_discovery.sh@64 -- # sort 00:23:14.664 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:14.923 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.923 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.923 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.923 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.923 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@68 -- # sort 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@68 -- # xargs 00:23:14.923 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:14.923 
17:26:44 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:14.923 17:26:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@64 -- # sort 00:23:14.924 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@64 -- # xargs 00:23:14.924 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.924 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:14.924 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.924 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.924 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@68 -- # sort 00:23:14.924 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.924 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@68 -- # xargs 00:23:14.924 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.924 [2024-04-25 17:26:44.831995] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.924 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.924 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@64 -- # sort 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@64 -- # xargs 00:23:14.924 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:14.924 17:26:44 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:14.924 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.924 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 [2024-04-25 17:26:44.901290] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.183 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:15.183 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.183 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:15.183 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.183 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 17:26:44 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:15.183 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.183 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:15.183 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.183 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:15.183 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.183 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 [2024-04-25 17:26:44.941189] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:15.183 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:15.183 17:26:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.183 17:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.183 [2024-04-25 17:26:44.949181] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:15.183 17:26:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=93374 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:15.183 17:26:44 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:16.119 [2024-04-25 17:26:45.731995] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:16.119 Established under name 'CDC' 00:23:16.378 [2024-04-25 17:26:46.132004] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:16.378 [2024-04-25 17:26:46.132029] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:23:16.378 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:16.378 cookie is 0 00:23:16.378 is_local: 1 00:23:16.378 our_own: 0 00:23:16.378 wide_area: 0 00:23:16.378 multicast: 1 00:23:16.378 cached: 1 00:23:16.378 [2024-04-25 17:26:46.231998] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:16.378 [2024-04-25 17:26:46.232022] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:23:16.378 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:16.378 cookie is 0 00:23:16.378 is_local: 1 00:23:16.378 our_own: 0 00:23:16.378 wide_area: 0 00:23:16.378 multicast: 1 00:23:16.378 cached: 1 00:23:17.312 [2024-04-25 17:26:47.140402] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:17.312 [2024-04-25 17:26:47.140426] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:23:17.312 [2024-04-25 17:26:47.140459] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:17.312 [2024-04-25 17:26:47.226521] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:17.312 [2024-04-25 17:26:47.240287] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:17.312 [2024-04-25 17:26:47.240323] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:17.312 [2024-04-25 17:26:47.240354] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.312 [2024-04-25 17:26:47.287898] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:17.312 [2024-04-25 17:26:47.287944] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:17.570 [2024-04-25 17:26:47.326049] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:17.570 [2024-04-25 17:26:47.380789] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:17.570 [2024-04-25 17:26:47.380815] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:20.099 17:26:49 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:20.099 17:26:49 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:20.099 17:26:49 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:20.099 17:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.099 17:26:49 -- host/mdns_discovery.sh@80 -- # sort 00:23:20.099 17:26:49 -- common/autotest_common.sh@10 -- # set +x 00:23:20.099 17:26:49 -- host/mdns_discovery.sh@80 -- # xargs 00:23:20.099 17:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@76 -- # sort 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@76 -- # xargs 00:23:20.099 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.099 17:26:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.099 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:20.099 17:26:50 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@68 -- # sort 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@68 -- # xargs 00:23:20.358 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.358 17:26:50 -- common/autotest_common.sh@10 -- # set 
+x 00:23:20.358 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@64 -- # sort 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:20.358 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.358 17:26:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@64 -- # xargs 00:23:20.358 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:20.358 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.358 17:26:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # xargs 00:23:20.358 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:20.358 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.358 17:26:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # xargs 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.358 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:20.358 17:26:50 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:20.358 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.358 17:26:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.358 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.617 17:26:50 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:20.617 17:26:50 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:20.617 17:26:50 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:20.617 17:26:50 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:20.617 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.617 17:26:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.617 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.617 17:26:50 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:20.617 17:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.617 17:26:50 -- common/autotest_common.sh@10 -- # set +x 00:23:20.617 17:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.617 17:26:50 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.552 17:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:21.552 17:26:51 -- common/autotest_common.sh@10 -- # set +x 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@64 -- # sort 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@64 -- # xargs 00:23:21.552 17:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:21.552 17:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.552 17:26:51 -- common/autotest_common.sh@10 -- # set +x 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:21.552 17:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:21.552 17:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.552 17:26:51 -- common/autotest_common.sh@10 -- # set +x 00:23:21.552 [2024-04-25 17:26:51.483848] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.552 [2024-04-25 17:26:51.484346] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:21.552 [2024-04-25 17:26:51.484373] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:21.552 [2024-04-25 17:26:51.484405] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:21.552 [2024-04-25 17:26:51.484418] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:21.552 17:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:21.552 17:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.552 17:26:51 -- common/autotest_common.sh@10 -- # set +x 00:23:21.552 [2024-04-25 17:26:51.491794] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:21.552 [2024-04-25 17:26:51.492351] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:21.552 [2024-04-25 17:26:51.492408] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:21.552 17:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.552 17:26:51 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:21.811 [2024-04-25 17:26:51.623460] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:21.811 [2024-04-25 17:26:51.623613] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:21.811 [2024-04-25 17:26:51.683697] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:21.811 [2024-04-25 17:26:51.683741] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:21.811 [2024-04-25 17:26:51.683748] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:21.811 [2024-04-25 17:26:51.683763] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:21.811 [2024-04-25 17:26:51.683805] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:21.811 [2024-04-25 17:26:51.683813] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:21.811 [2024-04-25 17:26:51.683829] 
bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:21.811 [2024-04-25 17:26:51.683841] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:21.811 [2024-04-25 17:26:51.729548] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:21.811 [2024-04-25 17:26:51.729564] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:21.811 [2024-04-25 17:26:51.729599] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:21.811 [2024-04-25 17:26:51.729607] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:22.747 17:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.747 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@68 -- # sort 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@68 -- # xargs 00:23:22.747 17:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@64 -- # sort 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@64 -- # xargs 00:23:22.747 17:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.747 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:23:22.747 17:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.747 17:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.747 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:23:22.747 17:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:23:22.747 17:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.747 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:23:22.747 17:26:52 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.747 17:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:23.008 17:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.008 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:23:23.008 17:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:23.008 17:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.008 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:23:23.008 [2024-04-25 17:26:52.809195] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.008 [2024-04-25 17:26:52.809224] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.008 [2024-04-25 17:26:52.809254] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:23.008 [2024-04-25 17:26:52.809266] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:23.008 [2024-04-25 17:26:52.810248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.810276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.810304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.810312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.810320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.810328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.810336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.810344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.810351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.008 
17:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:23.008 17:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.008 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:23:23.008 [2024-04-25 17:26:52.817218] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.008 [2024-04-25 17:26:52.817266] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:23.008 [2024-04-25 17:26:52.820172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.008 17:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.008 17:26:52 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:23.008 [2024-04-25 17:26:52.826389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.826590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.826802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.827040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.827238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.827308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.827427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.008 [2024-04-25 17:26:52.827523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.008 [2024-04-25 17:26:52.827538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.008 [2024-04-25 17:26:52.830195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.008 [2024-04-25 17:26:52.830433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.830484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.830501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.008 [2024-04-25 17:26:52.830511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.008 [2024-04-25 17:26:52.830528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.008 [2024-04-25 17:26:52.830541] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.008 [2024-04-25 17:26:52.830549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.008 [2024-04-25 17:26:52.830559] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.008 [2024-04-25 17:26:52.830574] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.008 [2024-04-25 17:26:52.836357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.008 [2024-04-25 17:26:52.840381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.008 [2024-04-25 17:26:52.840471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.840514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.840529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.008 [2024-04-25 17:26:52.840538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.008 [2024-04-25 17:26:52.840552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.008 [2024-04-25 17:26:52.840564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.008 [2024-04-25 17:26:52.840571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.008 [2024-04-25 17:26:52.840580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.008 [2024-04-25 17:26:52.840592] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.008 [2024-04-25 17:26:52.846368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.008 [2024-04-25 17:26:52.846457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.846498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.846512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.008 [2024-04-25 17:26:52.846520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.008 [2024-04-25 17:26:52.846534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.008 [2024-04-25 17:26:52.846545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.008 [2024-04-25 17:26:52.846552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.008 [2024-04-25 17:26:52.846560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.008 [2024-04-25 17:26:52.846572] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.008 [2024-04-25 17:26:52.850442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.008 [2024-04-25 17:26:52.850527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.850567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.850582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.008 [2024-04-25 17:26:52.850590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.008 [2024-04-25 17:26:52.850603] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.008 [2024-04-25 17:26:52.850615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.008 [2024-04-25 17:26:52.850622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.008 [2024-04-25 17:26:52.850630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.008 [2024-04-25 17:26:52.850656] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.008 [2024-04-25 17:26:52.856430] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.008 [2024-04-25 17:26:52.856520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.856562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.008 [2024-04-25 17:26:52.856578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.008 [2024-04-25 17:26:52.856587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.008 [2024-04-25 17:26:52.856612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.008 [2024-04-25 17:26:52.856639] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.856646] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.856654] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.009 [2024-04-25 17:26:52.856681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.009 [2024-04-25 17:26:52.860507] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.009 [2024-04-25 17:26:52.860793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.861059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.861084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.009 [2024-04-25 17:26:52.861094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.861132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.861148] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.861156] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.861164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.009 [2024-04-25 17:26:52.861179] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.009 [2024-04-25 17:26:52.866495] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.009 [2024-04-25 17:26:52.866592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.866635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.866649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.009 [2024-04-25 17:26:52.866658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.866672] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.866684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.866691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.866699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.009 [2024-04-25 17:26:52.866712] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.009 [2024-04-25 17:26:52.870754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.009 [2024-04-25 17:26:52.870842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.870883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.870897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.009 [2024-04-25 17:26:52.870906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.870933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.870946] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.870954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.870961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.009 [2024-04-25 17:26:52.870973] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.009 [2024-04-25 17:26:52.876560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.009 [2024-04-25 17:26:52.876644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.876685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.876699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.009 [2024-04-25 17:26:52.876708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.876764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.876779] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.876787] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.876795] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.009 [2024-04-25 17:26:52.876808] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.009 [2024-04-25 17:26:52.880814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.009 [2024-04-25 17:26:52.880899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.880939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.880954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.009 [2024-04-25 17:26:52.880962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.880991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.881005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.881012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.881020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.009 [2024-04-25 17:26:52.881032] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.009 [2024-04-25 17:26:52.886617] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.009 [2024-04-25 17:26:52.886702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.886775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.886792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.009 [2024-04-25 17:26:52.886801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.886815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.886843] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.886858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.886866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.009 [2024-04-25 17:26:52.886879] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.009 [2024-04-25 17:26:52.890874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.009 [2024-04-25 17:26:52.890960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.891000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.891014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.009 [2024-04-25 17:26:52.891022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.891049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.891062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.891070] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.891077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.009 [2024-04-25 17:26:52.891090] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.009 [2024-04-25 17:26:52.896677] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.009 [2024-04-25 17:26:52.896801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.896844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.896859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.009 [2024-04-25 17:26:52.896867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.896881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.896893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.896901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.896909] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.009 [2024-04-25 17:26:52.896922] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.009 [2024-04-25 17:26:52.900933] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.009 [2024-04-25 17:26:52.901036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.901077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.009 [2024-04-25 17:26:52.901107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.009 [2024-04-25 17:26:52.901116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.009 [2024-04-25 17:26:52.901129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.009 [2024-04-25 17:26:52.901141] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.009 [2024-04-25 17:26:52.901148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.009 [2024-04-25 17:26:52.901156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.009 [2024-04-25 17:26:52.901167] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.009 [2024-04-25 17:26:52.906758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.009 [2024-04-25 17:26:52.906849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.906890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.906905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.010 [2024-04-25 17:26:52.906913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.906927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.906938] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.906945] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.906953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.010 [2024-04-25 17:26:52.906965] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.010 [2024-04-25 17:26:52.910993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.010 [2024-04-25 17:26:52.911090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.911131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.911146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.010 [2024-04-25 17:26:52.911154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.911168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.911180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.911187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.911195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.010 [2024-04-25 17:26:52.911207] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.010 [2024-04-25 17:26:52.916820] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.010 [2024-04-25 17:26:52.916906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.916947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.916962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.010 [2024-04-25 17:26:52.916970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.916984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.917010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.917020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.917028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.010 [2024-04-25 17:26:52.917040] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.010 [2024-04-25 17:26:52.921056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.010 [2024-04-25 17:26:52.921157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.921197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.921211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.010 [2024-04-25 17:26:52.921219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.921233] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.921251] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.921258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.921266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.010 [2024-04-25 17:26:52.921277] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.010 [2024-04-25 17:26:52.926879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.010 [2024-04-25 17:26:52.926972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.927013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.927027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.010 [2024-04-25 17:26:52.927036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.927049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.927074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.927083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.927091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.010 [2024-04-25 17:26:52.927103] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.010 [2024-04-25 17:26:52.931113] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.010 [2024-04-25 17:26:52.931202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.931242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.931257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.010 [2024-04-25 17:26:52.931265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.931278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.931290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.931305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.931312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.010 [2024-04-25 17:26:52.931324] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.010 [2024-04-25 17:26:52.936940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.010 [2024-04-25 17:26:52.937019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.937061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.937075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.010 [2024-04-25 17:26:52.937084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.937113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.937137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.937146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.937153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.010 [2024-04-25 17:26:52.937166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:23.010 [2024-04-25 17:26:52.941171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:23.010 [2024-04-25 17:26:52.941255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.941298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.941312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68ade0 with addr=10.0.0.2, port=4420 00:23:23.010 [2024-04-25 17:26:52.941321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68ade0 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.941334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ade0 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.941345] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.941352] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.941360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:23.010 [2024-04-25 17:26:52.941372] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:23.010 [2024-04-25 17:26:52.946987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:23.010 [2024-04-25 17:26:52.947073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.947113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.010 [2024-04-25 17:26:52.947127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x621c20 with addr=10.0.0.3, port=4420 00:23:23.010 [2024-04-25 17:26:52.947136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621c20 is same with the state(5) to be set 00:23:23.010 [2024-04-25 17:26:52.947149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x621c20 (9): Bad file descriptor 00:23:23.010 [2024-04-25 17:26:52.947174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:23.010 [2024-04-25 17:26:52.947188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:23.010 [2024-04-25 17:26:52.947196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:23.010 [2024-04-25 17:26:52.947208] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
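Note on the repeated failures above: errno 111 on Linux is ECONNREFUSED, i.e. nothing is listening on 10.0.0.2:4420 / 10.0.0.3:4420 any more while the test moves the subsystems over to port 4421, so these reconnect attempts are expected to fail. A quick way to confirm the errno name on the build host (sketch; assumes python3 is on the PATH):
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'   # ECONNREFUSED Connection refused
The discovery records that follow then drop the stale 4420 paths and re-add the 4421 paths.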
00:23:23.010 [2024-04-25 17:26:52.949170] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:23.010 [2024-04-25 17:26:52.949194] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:23.010 [2024-04-25 17:26:52.949211] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.011 [2024-04-25 17:26:52.949240] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:23.011 [2024-04-25 17:26:52.949252] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:23.011 [2024-04-25 17:26:52.949264] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:23.269 [2024-04-25 17:26:53.035266] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:23.269 [2024-04-25 17:26:53.035315] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:24.203 17:26:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.203 17:26:53 -- common/autotest_common.sh@10 -- # set +x 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@68 -- # xargs 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@68 -- # sort 00:23:24.203 17:26:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.203 17:26:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.203 17:26:53 -- common/autotest_common.sh@10 -- # set +x 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@64 -- # sort 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@64 -- # xargs 00:23:24.203 17:26:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.203 17:26:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # xargs 00:23:24.203 17:26:53 -- common/autotest_common.sh@10 -- # set +x 00:23:24.203 17:26:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
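For reference, rpc_cmd in the records above is effectively SPDK's autotest wrapper around scripts/rpc.py, so the get_subsystem_names / get_bdev_list checks can be reproduced by hand as a minimal sketch (same host socket and jq filters as in the log; expected output taken from the [[ ... == ... ]] comparisons above):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  # expected: mdns0_nvme0 mdns1_nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # expected: mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2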
00:23:24.203 17:26:53 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:24.203 17:26:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.203 17:26:53 -- host/mdns_discovery.sh@72 -- # xargs 00:23:24.203 17:26:53 -- common/autotest_common.sh@10 -- # set +x 00:23:24.203 17:26:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:24.203 17:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.203 17:26:54 -- common/autotest_common.sh@10 -- # set +x 00:23:24.203 17:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:24.203 17:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.203 17:26:54 -- common/autotest_common.sh@10 -- # set +x 00:23:24.203 17:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.203 17:26:54 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:24.203 [2024-04-25 17:26:54.132148] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:25.137 17:26:55 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:25.137 17:26:55 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:25.137 17:26:55 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:25.137 17:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.137 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.137 17:26:55 -- host/mdns_discovery.sh@80 -- # sort 00:23:25.137 17:26:55 -- host/mdns_discovery.sh@80 -- # xargs 00:23:25.137 17:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@68 -- # sort 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@68 -- # xargs 00:23:25.396 17:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.396 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.396 17:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:25.396 17:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@64 -- # sort 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@64 -- # xargs 00:23:25.396 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.396 17:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:25.396 17:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.396 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.396 17:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:25.396 17:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.396 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.396 17:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:25.396 17:26:55 -- common/autotest_common.sh@638 -- # local es=0 00:23:25.396 17:26:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:25.396 17:26:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:25.396 17:26:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:25.396 17:26:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:25.396 17:26:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:25.396 17:26:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:25.396 17:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.396 17:26:55 -- common/autotest_common.sh@10 -- # set +x 00:23:25.396 [2024-04-25 17:26:55.332392] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:25.396 2024/04/25 17:26:55 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:25.396 request: 00:23:25.396 { 00:23:25.396 "method": "bdev_nvme_start_mdns_discovery", 00:23:25.396 "params": { 00:23:25.396 "name": "mdns", 00:23:25.396 "svcname": "_nvme-disc._http", 00:23:25.396 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:25.396 } 00:23:25.396 } 00:23:25.396 Got JSON-RPC error response 00:23:25.396 GoRPCClient: error on JSON-RPC call 00:23:25.396 17:26:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:25.396 17:26:55 -- 
common/autotest_common.sh@641 -- # es=1 00:23:25.396 17:26:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:25.396 17:26:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:25.396 17:26:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:25.396 17:26:55 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:25.994 [2024-04-25 17:26:55.716976] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:25.994 [2024-04-25 17:26:55.816973] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:25.994 [2024-04-25 17:26:55.916979] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:25.994 [2024-04-25 17:26:55.916998] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:23:25.994 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:25.994 cookie is 0 00:23:25.994 is_local: 1 00:23:25.994 our_own: 0 00:23:25.994 wide_area: 0 00:23:25.994 multicast: 1 00:23:25.994 cached: 1 00:23:26.265 [2024-04-25 17:26:56.016981] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:26.265 [2024-04-25 17:26:56.017002] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:23:26.265 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:26.265 cookie is 0 00:23:26.265 is_local: 1 00:23:26.265 our_own: 0 00:23:26.265 wide_area: 0 00:23:26.265 multicast: 1 00:23:26.265 cached: 1 00:23:27.200 [2024-04-25 17:26:56.922272] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:27.201 [2024-04-25 17:26:56.922297] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:27.201 [2024-04-25 17:26:56.922332] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:27.201 [2024-04-25 17:26:57.008384] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:27.201 [2024-04-25 17:26:57.022017] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:27.201 [2024-04-25 17:26:57.022038] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:27.201 [2024-04-25 17:26:57.022069] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.201 [2024-04-25 17:26:57.069987] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:27.201 [2024-04-25 17:26:57.070018] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:27.201 [2024-04-25 17:26:57.108265] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:27.201 [2024-04-25 17:26:57.166860] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:27.201 [2024-04-25 17:26:57.166902] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:30.486 17:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.486 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@80 -- # sort 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@80 -- # xargs 00:23:30.486 17:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:30.486 17:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.486 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@76 -- # sort 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@76 -- # xargs 00:23:30.486 17:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:30.486 17:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.486 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@64 -- # sort 00:23:30.486 17:27:00 -- host/mdns_discovery.sh@64 -- # xargs 00:23:30.745 17:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.745 17:27:00 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:30.745 17:27:00 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:30.745 17:27:00 -- common/autotest_common.sh@638 -- # local es=0 00:23:30.745 17:27:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:30.745 17:27:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:30.745 17:27:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:30.745 17:27:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:30.745 17:27:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:30.745 17:27:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:30.745 17:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.745 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.745 [2024-04-25 17:27:00.520073] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:30.745 2024/04/25 17:27:00 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:30.745 request: 00:23:30.745 { 00:23:30.745 "method": "bdev_nvme_start_mdns_discovery", 00:23:30.745 "params": { 00:23:30.745 "name": "cdc", 00:23:30.745 "svcname": "_nvme-disc._tcp", 00:23:30.745 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:30.745 } 00:23:30.745 } 00:23:30.745 Got JSON-RPC error response 00:23:30.745 GoRPCClient: error on JSON-RPC call 00:23:30.745 17:27:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:30.745 17:27:00 -- common/autotest_common.sh@641 -- # es=1 00:23:30.745 17:27:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:30.745 17:27:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:30.745 17:27:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:30.746 17:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:30.746 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@76 -- # sort 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@76 -- # xargs 00:23:30.746 17:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.746 17:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@64 -- # sort 00:23:30.746 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@64 -- # xargs 00:23:30.746 17:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:30.746 17:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.746 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.746 17:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@197 -- # kill 93308 00:23:30.746 17:27:00 -- host/mdns_discovery.sh@200 -- # wait 93308 00:23:30.746 [2024-04-25 17:27:00.716985] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:31.005 17:27:00 -- host/mdns_discovery.sh@201 -- # kill 93374 00:23:31.005 Got SIGTERM, quitting. 00:23:31.005 17:27:00 -- host/mdns_discovery.sh@202 -- # kill 93322 00:23:31.005 17:27:00 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:31.005 17:27:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:31.005 17:27:00 -- nvmf/common.sh@117 -- # sync 00:23:31.005 Got SIGTERM, quitting. 
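For reference, the negative test that produced the Code=-17 error above boils down to a single RPC, shown here as a standalone sketch with the same socket and parameters as in the logged JSON-RPC request:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # expected failure: Code=-17 Msg=File exists, because mDNS discovery for _nvme-disc._tcp is already running
The earlier @182 check is the same pattern with -b mdns -s _nvme-disc._http, rejected because a discovery service named mdns is already registered.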
00:23:31.005 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:31.005 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:31.005 avahi-daemon 0.8 exiting. 00:23:31.005 17:27:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.005 17:27:00 -- nvmf/common.sh@120 -- # set +e 00:23:31.005 17:27:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.005 17:27:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.005 rmmod nvme_tcp 00:23:31.005 rmmod nvme_fabrics 00:23:31.005 rmmod nvme_keyring 00:23:31.005 17:27:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.005 17:27:00 -- nvmf/common.sh@124 -- # set -e 00:23:31.005 17:27:00 -- nvmf/common.sh@125 -- # return 0 00:23:31.005 17:27:00 -- nvmf/common.sh@478 -- # '[' -n 93252 ']' 00:23:31.005 17:27:00 -- nvmf/common.sh@479 -- # killprocess 93252 00:23:31.005 17:27:00 -- common/autotest_common.sh@936 -- # '[' -z 93252 ']' 00:23:31.005 17:27:00 -- common/autotest_common.sh@940 -- # kill -0 93252 00:23:31.005 17:27:00 -- common/autotest_common.sh@941 -- # uname 00:23:31.005 17:27:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.005 17:27:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93252 00:23:31.005 17:27:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:31.005 killing process with pid 93252 00:23:31.005 17:27:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:31.005 17:27:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93252' 00:23:31.005 17:27:00 -- common/autotest_common.sh@955 -- # kill 93252 00:23:31.005 17:27:00 -- common/autotest_common.sh@960 -- # wait 93252 00:23:31.265 17:27:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:31.265 17:27:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:31.265 17:27:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:31.265 17:27:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.265 17:27:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.265 17:27:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.265 17:27:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.265 17:27:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.265 17:27:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:31.265 00:23:31.265 real 0m19.773s 00:23:31.265 user 0m38.730s 00:23:31.265 sys 0m1.840s 00:23:31.265 ************************************ 00:23:31.265 END TEST nvmf_mdns_discovery 00:23:31.265 ************************************ 00:23:31.265 17:27:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:31.265 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.265 17:27:01 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:23:31.265 17:27:01 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:31.265 17:27:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:31.265 17:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:31.265 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.524 ************************************ 00:23:31.524 START TEST nvmf_multipath 00:23:31.524 ************************************ 00:23:31.524 17:27:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:31.524 * Looking for 
test storage... 00:23:31.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:31.524 17:27:01 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:31.524 17:27:01 -- nvmf/common.sh@7 -- # uname -s 00:23:31.524 17:27:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.524 17:27:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.524 17:27:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.524 17:27:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.524 17:27:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.524 17:27:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.524 17:27:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.524 17:27:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.524 17:27:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.524 17:27:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.524 17:27:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:23:31.524 17:27:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:23:31.524 17:27:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.524 17:27:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.524 17:27:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:31.524 17:27:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.524 17:27:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:31.524 17:27:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.524 17:27:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.524 17:27:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.524 17:27:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.524 17:27:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.524 17:27:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.524 17:27:01 -- paths/export.sh@5 -- # export PATH 00:23:31.524 17:27:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.524 17:27:01 -- nvmf/common.sh@47 -- # : 0 00:23:31.524 17:27:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.524 17:27:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.524 17:27:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.524 17:27:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.524 17:27:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.524 17:27:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.524 17:27:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.524 17:27:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.524 17:27:01 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.524 17:27:01 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.524 17:27:01 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:31.524 17:27:01 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:31.524 17:27:01 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.524 17:27:01 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:31.524 17:27:01 -- host/multipath.sh@30 -- # nvmftestinit 00:23:31.524 17:27:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:31.524 17:27:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.524 17:27:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:31.524 17:27:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:31.524 17:27:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:31.524 17:27:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.524 17:27:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.524 17:27:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.524 17:27:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:31.524 17:27:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:31.524 17:27:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:31.524 17:27:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:31.524 17:27:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:31.524 17:27:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:31.524 17:27:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.525 17:27:01 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.525 17:27:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:31.525 17:27:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:31.525 17:27:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:31.525 17:27:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:31.525 17:27:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:31.525 17:27:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.525 17:27:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:31.525 17:27:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:31.525 17:27:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:31.525 17:27:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:31.525 17:27:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:31.525 17:27:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:31.525 Cannot find device "nvmf_tgt_br" 00:23:31.525 17:27:01 -- nvmf/common.sh@155 -- # true 00:23:31.525 17:27:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:31.525 Cannot find device "nvmf_tgt_br2" 00:23:31.525 17:27:01 -- nvmf/common.sh@156 -- # true 00:23:31.525 17:27:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:31.525 17:27:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:31.525 Cannot find device "nvmf_tgt_br" 00:23:31.525 17:27:01 -- nvmf/common.sh@158 -- # true 00:23:31.525 17:27:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:31.525 Cannot find device "nvmf_tgt_br2" 00:23:31.525 17:27:01 -- nvmf/common.sh@159 -- # true 00:23:31.525 17:27:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:31.525 17:27:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:31.784 17:27:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:31.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.784 17:27:01 -- nvmf/common.sh@162 -- # true 00:23:31.784 17:27:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:31.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.784 17:27:01 -- nvmf/common.sh@163 -- # true 00:23:31.784 17:27:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:31.784 17:27:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:31.784 17:27:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:31.784 17:27:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:31.784 17:27:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:31.784 17:27:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:31.784 17:27:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:31.784 17:27:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:31.784 17:27:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:31.784 17:27:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:31.784 17:27:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:31.784 17:27:01 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:23:31.784 17:27:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:31.784 17:27:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:31.784 17:27:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:31.784 17:27:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:31.784 17:27:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:31.784 17:27:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:31.784 17:27:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:31.784 17:27:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:31.784 17:27:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:31.784 17:27:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:31.784 17:27:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:31.784 17:27:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:31.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:23:31.784 00:23:31.784 --- 10.0.0.2 ping statistics --- 00:23:31.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.785 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:31.785 17:27:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:31.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:31.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:23:31.785 00:23:31.785 --- 10.0.0.3 ping statistics --- 00:23:31.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.785 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:31.785 17:27:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:31.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:31.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:31.785 00:23:31.785 --- 10.0.0.1 ping statistics --- 00:23:31.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.785 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:31.785 17:27:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.785 17:27:01 -- nvmf/common.sh@422 -- # return 0 00:23:31.785 17:27:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:31.785 17:27:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.785 17:27:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:31.785 17:27:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:31.785 17:27:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.785 17:27:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:31.785 17:27:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:31.785 17:27:01 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:31.785 17:27:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:31.785 17:27:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:31.785 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.785 17:27:01 -- nvmf/common.sh@470 -- # nvmfpid=93882 00:23:31.785 17:27:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:31.785 17:27:01 -- nvmf/common.sh@471 -- # waitforlisten 93882 00:23:31.785 17:27:01 -- common/autotest_common.sh@817 -- # '[' -z 93882 ']' 00:23:31.785 17:27:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.785 17:27:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:31.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.785 17:27:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.785 17:27:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:31.785 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:23:32.044 [2024-04-25 17:27:01.768767] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:23:32.044 [2024-04-25 17:27:01.768852] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.044 [2024-04-25 17:27:01.905942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:32.044 [2024-04-25 17:27:01.956990] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.044 [2024-04-25 17:27:01.957057] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.044 [2024-04-25 17:27:01.957083] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.044 [2024-04-25 17:27:01.957090] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.044 [2024-04-25 17:27:01.957096] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.044 [2024-04-25 17:27:01.957972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.044 [2024-04-25 17:27:01.957992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.303 17:27:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:32.303 17:27:02 -- common/autotest_common.sh@850 -- # return 0 00:23:32.303 17:27:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:32.303 17:27:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:32.303 17:27:02 -- common/autotest_common.sh@10 -- # set +x 00:23:32.303 17:27:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.303 17:27:02 -- host/multipath.sh@33 -- # nvmfapp_pid=93882 00:23:32.303 17:27:02 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:32.563 [2024-04-25 17:27:02.340561] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.563 17:27:02 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:32.822 Malloc0 00:23:32.822 17:27:02 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:33.081 17:27:02 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.349 17:27:03 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.349 [2024-04-25 17:27:03.306206] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.607 17:27:03 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.607 [2024-04-25 17:27:03.570363] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.865 17:27:03 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:33.865 17:27:03 -- host/multipath.sh@44 -- # bdevperf_pid=93971 00:23:33.865 17:27:03 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.865 17:27:03 -- host/multipath.sh@47 -- # waitforlisten 93971 /var/tmp/bdevperf.sock 00:23:33.865 17:27:03 -- common/autotest_common.sh@817 -- # '[' -z 93971 ']' 00:23:33.865 17:27:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.865 17:27:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:33.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.865 17:27:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
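Condensed from the records above, the target side of the multipath test is prepared with these RPCs before bdevperf starts (sketch; same full rpc.py path and default target socket as in the log):
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
The records that follow attach the same subsystem twice from the initiator (ports 4420 and 4421, the second with -x multipath) and then flip the listeners' ANA states while bpftrace counts I/O per path.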
00:23:33.865 17:27:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:33.865 17:27:03 -- common/autotest_common.sh@10 -- # set +x 00:23:34.797 17:27:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:34.797 17:27:04 -- common/autotest_common.sh@850 -- # return 0 00:23:34.797 17:27:04 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:34.797 17:27:04 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:35.363 Nvme0n1 00:23:35.363 17:27:05 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:35.620 Nvme0n1 00:23:35.620 17:27:05 -- host/multipath.sh@78 -- # sleep 1 00:23:35.620 17:27:05 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:36.553 17:27:06 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:36.553 17:27:06 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.810 17:27:06 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:37.069 17:27:06 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:37.069 17:27:06 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93882 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:37.069 17:27:06 -- host/multipath.sh@65 -- # dtrace_pid=94060 00:23:37.069 17:27:06 -- host/multipath.sh@66 -- # sleep 6 00:23:43.632 17:27:12 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:43.632 17:27:12 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:43.632 17:27:13 -- host/multipath.sh@67 -- # active_port=4421 00:23:43.632 17:27:13 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.632 Attaching 4 probes... 
00:23:43.632 @path[10.0.0.2, 4421]: 19336 00:23:43.632 @path[10.0.0.2, 4421]: 20213 00:23:43.632 @path[10.0.0.2, 4421]: 19595 00:23:43.632 @path[10.0.0.2, 4421]: 19963 00:23:43.632 @path[10.0.0.2, 4421]: 19707 00:23:43.633 17:27:13 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:43.633 17:27:13 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:43.633 17:27:13 -- host/multipath.sh@69 -- # sed -n 1p 00:23:43.633 17:27:13 -- host/multipath.sh@69 -- # port=4421 00:23:43.633 17:27:13 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.633 17:27:13 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.633 17:27:13 -- host/multipath.sh@72 -- # kill 94060 00:23:43.633 17:27:13 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:43.633 17:27:13 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:43.633 17:27:13 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.633 17:27:13 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:43.891 17:27:13 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:43.891 17:27:13 -- host/multipath.sh@65 -- # dtrace_pid=94190 00:23:43.891 17:27:13 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93882 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:43.891 17:27:13 -- host/multipath.sh@66 -- # sleep 6 00:23:50.451 17:27:19 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:50.451 17:27:19 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:50.451 17:27:19 -- host/multipath.sh@67 -- # active_port=4420 00:23:50.451 17:27:19 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.451 Attaching 4 probes... 
00:23:50.451 @path[10.0.0.2, 4420]: 19530 00:23:50.451 @path[10.0.0.2, 4420]: 20027 00:23:50.451 @path[10.0.0.2, 4420]: 20111 00:23:50.451 @path[10.0.0.2, 4420]: 19927 00:23:50.451 @path[10.0.0.2, 4420]: 19985 00:23:50.451 17:27:19 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:50.451 17:27:19 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:50.451 17:27:19 -- host/multipath.sh@69 -- # sed -n 1p 00:23:50.451 17:27:19 -- host/multipath.sh@69 -- # port=4420 00:23:50.451 17:27:19 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:50.451 17:27:19 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:50.451 17:27:19 -- host/multipath.sh@72 -- # kill 94190 00:23:50.451 17:27:19 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.451 17:27:19 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:50.451 17:27:19 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:50.451 17:27:20 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:50.710 17:27:20 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:50.710 17:27:20 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93882 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:50.710 17:27:20 -- host/multipath.sh@65 -- # dtrace_pid=94321 00:23:50.710 17:27:20 -- host/multipath.sh@66 -- # sleep 6 00:23:57.276 17:27:26 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:57.276 17:27:26 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:57.276 17:27:26 -- host/multipath.sh@67 -- # active_port=4421 00:23:57.276 17:27:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.276 Attaching 4 probes... 
00:23:57.276 @path[10.0.0.2, 4421]: 14553 00:23:57.276 @path[10.0.0.2, 4421]: 19347 00:23:57.276 @path[10.0.0.2, 4421]: 19513 00:23:57.276 @path[10.0.0.2, 4421]: 19657 00:23:57.276 @path[10.0.0.2, 4421]: 19593 00:23:57.276 17:27:26 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:57.276 17:27:26 -- host/multipath.sh@69 -- # sed -n 1p 00:23:57.276 17:27:26 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:57.276 17:27:26 -- host/multipath.sh@69 -- # port=4421 00:23:57.276 17:27:26 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:57.276 17:27:26 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:57.276 17:27:26 -- host/multipath.sh@72 -- # kill 94321 00:23:57.276 17:27:26 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.276 17:27:26 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:57.276 17:27:26 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:57.276 17:27:27 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:57.536 17:27:27 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:57.536 17:27:27 -- host/multipath.sh@65 -- # dtrace_pid=94457 00:23:57.536 17:27:27 -- host/multipath.sh@66 -- # sleep 6 00:23:57.536 17:27:27 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93882 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:04.133 17:27:33 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:04.133 17:27:33 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:04.133 17:27:33 -- host/multipath.sh@67 -- # active_port= 00:24:04.133 17:27:33 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:04.133 Attaching 4 probes... 
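The three host/multipath.sh@69 filters above reduce the first counter line of the dumped trace to a bare port number. A standalone illustration follows; the echoed line is one of the counters printed above and the pipeline is copied from the log, while running it this way on its own is just for demonstration.

# cut keeps everything before ']'          -> "@path[10.0.0.2, 4421"
# awk matches the @path prefix, prints $2  -> "4421"
# sed -n 1p keeps only the first path line when several paths are present
echo '@path[10.0.0.2, 4421]: 19513' | cut -d ']' -f1 | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p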
00:24:04.133 00:24:04.133 00:24:04.133 00:24:04.133 00:24:04.133 00:24:04.133 17:27:33 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:04.133 17:27:33 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:04.133 17:27:33 -- host/multipath.sh@69 -- # sed -n 1p 00:24:04.133 17:27:33 -- host/multipath.sh@69 -- # port= 00:24:04.133 17:27:33 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:04.133 17:27:33 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:04.133 17:27:33 -- host/multipath.sh@72 -- # kill 94457 00:24:04.133 17:27:33 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:04.133 17:27:33 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:04.133 17:27:33 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.133 17:27:33 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.133 17:27:34 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:04.133 17:27:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93882 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:04.133 17:27:34 -- host/multipath.sh@65 -- # dtrace_pid=94582 00:24:04.133 17:27:34 -- host/multipath.sh@66 -- # sleep 6 00:24:10.695 17:27:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:10.695 17:27:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:10.695 17:27:40 -- host/multipath.sh@67 -- # active_port=4421 00:24:10.695 17:27:40 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.695 Attaching 4 probes... 
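In the @93/@94 cycle above both listeners were set inaccessible, so there is no active path: the jq select for ana_state=="" matches no listener, active_port stays empty, the trace contains only timestamps, and both [[ '' == '' ]] checks pass. A small illustration of the empty select, using a hypothetical nvmf_subsystem_get_listeners payload trimmed to the two fields the filter actually reads:

# Hypothetical listener list with both ports inaccessible (shape inferred from the jq paths above).
listeners='[
  {"address": {"trsvcid": "4420"}, "ana_states": [{"ana_state": "inaccessible"}]},
  {"address": {"trsvcid": "4421"}, "ana_states": [{"ana_state": "inaccessible"}]}
]'
# Same filter shape as the @67 line above; no listener has ana_state "", so nothing is printed.
echo "$listeners" | jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'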
00:24:10.695 @path[10.0.0.2, 4421]: 19450 00:24:10.695 @path[10.0.0.2, 4421]: 19470 00:24:10.695 @path[10.0.0.2, 4421]: 19146 00:24:10.695 @path[10.0.0.2, 4421]: 19110 00:24:10.695 @path[10.0.0.2, 4421]: 19135 00:24:10.695 17:27:40 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:10.695 17:27:40 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:10.695 17:27:40 -- host/multipath.sh@69 -- # sed -n 1p 00:24:10.695 17:27:40 -- host/multipath.sh@69 -- # port=4421 00:24:10.695 17:27:40 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:10.695 17:27:40 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:10.695 17:27:40 -- host/multipath.sh@72 -- # kill 94582 00:24:10.695 17:27:40 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.695 17:27:40 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:10.695 [2024-04-25 17:27:40.531868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.531994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532073] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532232] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.695 [2024-04-25 17:27:40.532327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532352] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532392] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 [2024-04-25 17:27:40.532400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb697f0 is same with the state(5) to be set 00:24:10.696 17:27:40 -- host/multipath.sh@101 -- # sleep 1 00:24:11.631 17:27:41 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:11.631 17:27:41 -- host/multipath.sh@65 -- # dtrace_pid=94719 00:24:11.631 17:27:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93882 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:11.631 17:27:41 -- host/multipath.sh@66 -- # sleep 6 00:24:18.193 17:27:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:18.193 17:27:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:18.193 17:27:47 -- host/multipath.sh@67 -- # active_port=4420 00:24:18.193 17:27:47 -- 
host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:18.193 Attaching 4 probes... 00:24:18.193 @path[10.0.0.2, 4420]: 19273 00:24:18.193 @path[10.0.0.2, 4420]: 19461 00:24:18.193 @path[10.0.0.2, 4420]: 19459 00:24:18.193 @path[10.0.0.2, 4420]: 19544 00:24:18.193 @path[10.0.0.2, 4420]: 19561 00:24:18.193 17:27:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:18.193 17:27:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:18.193 17:27:47 -- host/multipath.sh@69 -- # sed -n 1p 00:24:18.193 17:27:47 -- host/multipath.sh@69 -- # port=4420 00:24:18.193 17:27:47 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:18.193 17:27:47 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:18.193 17:27:47 -- host/multipath.sh@72 -- # kill 94719 00:24:18.193 17:27:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:18.193 17:27:47 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:18.193 [2024-04-25 17:27:48.064709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:18.193 17:27:48 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.452 17:27:48 -- host/multipath.sh@111 -- # sleep 6 00:24:25.015 17:27:54 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:25.015 17:27:54 -- host/multipath.sh@65 -- # dtrace_pid=94910 00:24:25.015 17:27:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93882 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:25.015 17:27:54 -- host/multipath.sh@66 -- # sleep 6 00:24:31.591 17:28:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:31.591 17:28:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:31.591 17:28:00 -- host/multipath.sh@67 -- # active_port=4421 00:24:31.591 17:28:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:31.591 Attaching 4 probes... 
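This leg of the test removed the 4421 listener (the nvmf_subsystem_remove_listener call before the recv-state messages), confirmed I/O had failed over to 4420, and then brought 4421 back as an optimized path via the @107/@108 calls above. The two RPCs in the shape the log shows; the $rpc shorthand is only for readability here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Re-create the TCP listener on port 4421 (the target logs "Listening on 10.0.0.2 port 4421")...
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# ...and mark it optimized so I/O is expected to move back to it.
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized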
00:24:31.591 @path[10.0.0.2, 4421]: 18860 00:24:31.591 @path[10.0.0.2, 4421]: 19418 00:24:31.591 @path[10.0.0.2, 4421]: 19463 00:24:31.591 @path[10.0.0.2, 4421]: 19305 00:24:31.591 @path[10.0.0.2, 4421]: 19343 00:24:31.591 17:28:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:31.591 17:28:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:31.591 17:28:00 -- host/multipath.sh@69 -- # sed -n 1p 00:24:31.591 17:28:00 -- host/multipath.sh@69 -- # port=4421 00:24:31.591 17:28:00 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:31.591 17:28:00 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:31.591 17:28:00 -- host/multipath.sh@72 -- # kill 94910 00:24:31.591 17:28:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:31.591 17:28:00 -- host/multipath.sh@114 -- # killprocess 93971 00:24:31.591 17:28:00 -- common/autotest_common.sh@936 -- # '[' -z 93971 ']' 00:24:31.591 17:28:00 -- common/autotest_common.sh@940 -- # kill -0 93971 00:24:31.591 17:28:00 -- common/autotest_common.sh@941 -- # uname 00:24:31.591 17:28:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:31.591 17:28:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93971 00:24:31.591 17:28:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:31.591 killing process with pid 93971 00:24:31.591 17:28:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:31.591 17:28:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93971' 00:24:31.591 17:28:00 -- common/autotest_common.sh@955 -- # kill 93971 00:24:31.591 17:28:00 -- common/autotest_common.sh@960 -- # wait 93971 00:24:31.591 Connection closed with partial response: 00:24:31.591 00:24:31.591 00:24:31.591 17:28:00 -- host/multipath.sh@116 -- # wait 93971 00:24:31.591 17:28:00 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:31.591 [2024-04-25 17:27:03.630367] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:24:31.591 [2024-04-25 17:27:03.630442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93971 ] 00:24:31.591 [2024-04-25 17:27:03.761497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.591 [2024-04-25 17:27:03.818446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.591 Running I/O for 90 seconds... 
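Teardown: killprocess 93971 stops the bdevperf host, after which the test cats the try.txt it produced (the qpair trace that follows). The sketch below mirrors only the common/autotest_common.sh commands the log traces at lines 936-960 (empty-pid check, liveness check, comm lookup, kill, wait); the helper also compares the process name against sudo, and since that branch is never taken in this run its behaviour is not shown and is omitted here.

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # mirrors '[' -z 93971 ']' in the trace
    kill -0 "$pid"                             # only checks the pid exists, does not terminate it
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 for this bdevperf
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                # let the process exit before try.txt is read
}

killprocess 93971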
00:24:31.591 [2024-04-25 17:27:13.690920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.690983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.691929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.691945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.693590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.591 [2024-04-25 17:27:13.693605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.591 [2024-04-25 17:27:13.695239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.592 [2024-04-25 17:27:13.695270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.592 [2024-04-25 17:27:13.695312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695829] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.695971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.695985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.592 
[2024-04-25 17:27:13.696213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.592 [2024-04-25 17:27:13.696546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.592 [2024-04-25 17:27:13.696938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.696963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.592 [2024-04-25 17:27:13.696989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.697011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.592 [2024-04-25 17:27:13.697027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.592 [2024-04-25 17:27:13.697090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.592 [2024-04-25 17:27:13.697109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.592 [2024-04-25 17:27:13.697122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.697142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.593 [2024-04-25 17:27:13.697155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.593 [2024-04-25 17:27:13.700248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.593 [2024-04-25 17:27:13.700332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700435] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 
[2024-04-25 17:27:13.700841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.700971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.700990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.593 [2024-04-25 17:27:13.701564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.593 [2024-04-25 17:27:13.701577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 
m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.701968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.701987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:13.702259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.594 [2024-04-25 17:27:13.702278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:122080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.226228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.226296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.226329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.226361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.226391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.226422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 
17:27:20.226452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.226470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.226483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122208 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.594 [2024-04-25 17:27:20.227541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.594 [2024-04-25 17:27:20.227554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227879] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:122296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.227956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.227979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:122320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:122344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 
17:27:20.228429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.595 [2024-04-25 17:27:20.228855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.595 [2024-04-25 17:27:20.228871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.228895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.596 [2024-04-25 17:27:20.228910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.228935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.596 [2024-04-25 17:27:20.228951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.596 [2024-04-25 17:27:20.229522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.596 [2024-04-25 17:27:20.229564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.596 [2024-04-25 17:27:20.229599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.229978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.229993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230599] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.230865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.230881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.231000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.596 [2024-04-25 17:27:20.231023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.231054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.231086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.596 [2024-04-25 17:27:20.231141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.596 [2024-04-25 17:27:20.231170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.597 
[2024-04-25 17:27:20.231203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.231976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.231992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:20.232580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:31.597 [2024-04-25 17:27:20.232611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:31.597 [2024-04-25 17:27:27.261768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.597 [2024-04-25 17:27:27.261784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.261805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.261821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.261842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:6 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.261858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.261878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.261902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.261923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.261937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.261958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.261973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.261993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262311] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.262956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.262991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.598 [2024-04-25 17:27:27.263487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:31.598 [2024-04-25 17:27:27.263508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.263970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.263984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 
[2024-04-25 17:27:27.264056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67520 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264928] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.264966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.264981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.265011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.265027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.265049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.599 [2024-04-25 17:27:27.265064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:31.599 [2024-04-25 17:27:27.265087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265321] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.265971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.265986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 
dnr:0 00:24:31.600 [2024-04-25 17:27:27.266012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.600 [2024-04-25 17:27:27.266690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:31.600 [2024-04-25 17:27:27.266747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.601 [2024-04-25 17:27:27.266766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.266792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.601 [2024-04-25 17:27:27.266808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.266841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.601 [2024-04-25 17:27:27.266858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.266884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.601 [2024-04-25 17:27:27.266899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.266928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.601 [2024-04-25 17:27:27.266942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.266969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:27.266984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.267010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:27.267025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.267051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:27.267066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.267107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:27.267122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.267163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:27.267178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.267204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:27.267220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.267262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:27.267281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:27.267309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:31.601 [2024-04-25 17:27:27.267325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.532775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.532818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.532864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.532882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.532898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.532912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.532927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.532941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.532956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.532970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.532985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.532998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 
17:27:40.533140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.601 [2024-04-25 17:27:40.533613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.601 [2024-04-25 17:27:40.533626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.533975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.533988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:31.602 [2024-04-25 17:27:40.534372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:31.602 [2024-04-25 17:27:40.534555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534655] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.602 [2024-04-25 17:27:40.534866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.602 [2024-04-25 17:27:40.534881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.534895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.534910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.534923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.534949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.534963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.534978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.534992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.603 [2024-04-25 17:27:40.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.535964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.535977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.536000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.536014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.536030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.536043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.536058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.536102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.536116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.603 [2024-04-25 17:27:40.536130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.603 [2024-04-25 17:27:40.536144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.604 [2024-04-25 17:27:40.536842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16de4f0 is same with the state(5) to be set 00:24:31.604 [2024-04-25 17:27:40.536882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:31.604 [2024-04-25 17:27:40.536892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:31.604 [2024-04-25 17:27:40.536903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23984 len:8 PRP1 0x0 PRP2 0x0 00:24:31.604 [2024-04-25 17:27:40.536916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.604 [2024-04-25 17:27:40.536966] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16de4f0 was disconnected and freed. reset controller. 
00:24:31.604 [2024-04-25 17:27:40.538265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.604 [2024-04-25 17:27:40.538349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187cd20 (9): Bad file descriptor 00:24:31.604 [2024-04-25 17:27:40.538476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.604 [2024-04-25 17:27:40.538533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.604 [2024-04-25 17:27:40.538556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x187cd20 with addr=10.0.0.2, port=4421 00:24:31.604 [2024-04-25 17:27:40.538571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187cd20 is same with the state(5) to be set 00:24:31.604 [2024-04-25 17:27:40.538594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187cd20 (9): Bad file descriptor 00:24:31.604 [2024-04-25 17:27:40.538616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.604 [2024-04-25 17:27:40.538630] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:31.604 [2024-04-25 17:27:40.538644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.604 [2024-04-25 17:27:40.538667] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:31.604 [2024-04-25 17:27:40.538680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:31.604 [2024-04-25 17:27:50.622008] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
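The sequence above is the path-failure event the multipath test exercises: bdev_nvme disconnects the controller for a reset, the reconnect to 10.0.0.2 port 4421 is refused (connect() errno 111) while that path is down, the first reinitialization attempt fails, and a retry roughly ten seconds later succeeds; the reads that were still queued on the old qpair are the ones completed above with ABORTED - SQ DELETION. As a rough illustration only (host/multipath.sh itself is not reproduced in this log, so the exact sequence may differ), such a path can be toggled from the target side with the same rpc.py calls that appear elsewhere in this run; the subsystem name and port below are taken from the log:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # take the secondary path down: new connections to 10.0.0.2:4421 are refused (errno 111)
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421

    sleep 10   # bdev_nvme keeps retrying the controller reset while the path is gone

    # restore the path: the next reconnect attempt succeeds and the reset completes
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421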
00:24:31.604 Received shutdown signal, test time was about 55.197309 seconds 00:24:31.604 00:24:31.604 Latency(us) 00:24:31.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.604 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:31.604 Verification LBA range: start 0x0 length 0x4000 00:24:31.604 Nvme0n1 : 55.20 8312.80 32.47 0.00 0.00 15370.31 1079.85 7015926.69 00:24:31.604 =================================================================================================================== 00:24:31.604 Total : 8312.80 32.47 0.00 0.00 15370.31 1079.85 7015926.69 00:24:31.604 17:28:00 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.604 17:28:01 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:31.604 17:28:01 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:31.604 17:28:01 -- host/multipath.sh@125 -- # nvmftestfini 00:24:31.604 17:28:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:31.604 17:28:01 -- nvmf/common.sh@117 -- # sync 00:24:31.604 17:28:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.605 17:28:01 -- nvmf/common.sh@120 -- # set +e 00:24:31.605 17:28:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.605 17:28:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.605 rmmod nvme_tcp 00:24:31.605 rmmod nvme_fabrics 00:24:31.605 rmmod nvme_keyring 00:24:31.605 17:28:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.605 17:28:01 -- nvmf/common.sh@124 -- # set -e 00:24:31.605 17:28:01 -- nvmf/common.sh@125 -- # return 0 00:24:31.605 17:28:01 -- nvmf/common.sh@478 -- # '[' -n 93882 ']' 00:24:31.605 17:28:01 -- nvmf/common.sh@479 -- # killprocess 93882 00:24:31.605 17:28:01 -- common/autotest_common.sh@936 -- # '[' -z 93882 ']' 00:24:31.605 17:28:01 -- common/autotest_common.sh@940 -- # kill -0 93882 00:24:31.605 17:28:01 -- common/autotest_common.sh@941 -- # uname 00:24:31.605 17:28:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:31.605 17:28:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93882 00:24:31.605 17:28:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:31.605 17:28:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:31.605 killing process with pid 93882 00:24:31.605 17:28:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93882' 00:24:31.605 17:28:01 -- common/autotest_common.sh@955 -- # kill 93882 00:24:31.605 17:28:01 -- common/autotest_common.sh@960 -- # wait 93882 00:24:31.605 17:28:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:31.605 17:28:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:31.605 17:28:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:31.605 17:28:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.605 17:28:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.605 17:28:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.605 17:28:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.605 17:28:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.605 17:28:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:31.605 00:24:31.605 real 1m0.133s 00:24:31.605 user 2m51.189s 00:24:31.605 sys 0m12.931s 00:24:31.605 17:28:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:31.605 
************************************ 00:24:31.605 END TEST nvmf_multipath 00:24:31.605 ************************************ 00:24:31.605 17:28:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.605 17:28:01 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:31.605 17:28:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:31.605 17:28:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:31.605 17:28:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.605 ************************************ 00:24:31.605 START TEST nvmf_timeout 00:24:31.605 ************************************ 00:24:31.605 17:28:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:31.862 * Looking for test storage... 00:24:31.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:31.862 17:28:01 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:31.862 17:28:01 -- nvmf/common.sh@7 -- # uname -s 00:24:31.862 17:28:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.862 17:28:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.862 17:28:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.862 17:28:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.862 17:28:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.862 17:28:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.862 17:28:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.862 17:28:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.862 17:28:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.862 17:28:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.862 17:28:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:24:31.862 17:28:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:24:31.862 17:28:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.862 17:28:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.862 17:28:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:31.862 17:28:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.862 17:28:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:31.862 17:28:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.862 17:28:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.862 17:28:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.862 17:28:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.862 17:28:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.862 17:28:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.862 17:28:01 -- paths/export.sh@5 -- # export PATH 00:24:31.862 17:28:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.862 17:28:01 -- nvmf/common.sh@47 -- # : 0 00:24:31.862 17:28:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.862 17:28:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.862 17:28:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.862 17:28:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.862 17:28:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.862 17:28:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.862 17:28:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.862 17:28:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.862 17:28:01 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.862 17:28:01 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.862 17:28:01 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.862 17:28:01 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:31.862 17:28:01 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.862 17:28:01 -- host/timeout.sh@19 -- # nvmftestinit 00:24:31.862 17:28:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:31.862 17:28:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.862 17:28:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:31.862 17:28:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:31.862 17:28:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:31.862 17:28:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.862 17:28:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.862 17:28:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.863 17:28:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:24:31.863 17:28:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:31.863 17:28:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:31.863 17:28:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:31.863 17:28:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:31.863 17:28:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:31.863 17:28:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.863 17:28:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.863 17:28:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:31.863 17:28:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:31.863 17:28:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:31.863 17:28:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:31.863 17:28:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:31.863 17:28:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.863 17:28:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:31.863 17:28:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:31.863 17:28:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:31.863 17:28:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:31.863 17:28:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:31.863 17:28:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:31.863 Cannot find device "nvmf_tgt_br" 00:24:31.863 17:28:01 -- nvmf/common.sh@155 -- # true 00:24:31.863 17:28:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:31.863 Cannot find device "nvmf_tgt_br2" 00:24:31.863 17:28:01 -- nvmf/common.sh@156 -- # true 00:24:31.863 17:28:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:31.863 17:28:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:31.863 Cannot find device "nvmf_tgt_br" 00:24:31.863 17:28:01 -- nvmf/common.sh@158 -- # true 00:24:31.863 17:28:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:31.863 Cannot find device "nvmf_tgt_br2" 00:24:31.863 17:28:01 -- nvmf/common.sh@159 -- # true 00:24:31.863 17:28:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:31.863 17:28:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:31.863 17:28:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:31.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.863 17:28:01 -- nvmf/common.sh@162 -- # true 00:24:31.863 17:28:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:31.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.863 17:28:01 -- nvmf/common.sh@163 -- # true 00:24:31.863 17:28:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:31.863 17:28:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:31.863 17:28:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:31.863 17:28:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:31.863 17:28:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:31.863 17:28:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:32.121 17:28:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:24:32.121 17:28:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:32.121 17:28:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:32.121 17:28:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:32.121 17:28:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:32.121 17:28:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:32.121 17:28:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:32.121 17:28:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:32.121 17:28:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:32.121 17:28:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:32.121 17:28:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:32.121 17:28:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:32.121 17:28:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:32.121 17:28:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:32.121 17:28:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:32.121 17:28:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:32.121 17:28:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:32.121 17:28:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:32.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:24:32.121 00:24:32.121 --- 10.0.0.2 ping statistics --- 00:24:32.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.121 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:32.121 17:28:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:32.121 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:32.121 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:24:32.121 00:24:32.121 --- 10.0.0.3 ping statistics --- 00:24:32.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.121 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:32.121 17:28:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:32.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:32.121 00:24:32.121 --- 10.0.0.1 ping statistics --- 00:24:32.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.121 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:32.121 17:28:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.121 17:28:01 -- nvmf/common.sh@422 -- # return 0 00:24:32.121 17:28:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:32.121 17:28:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.121 17:28:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:32.121 17:28:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:32.121 17:28:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.121 17:28:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:32.121 17:28:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:32.121 17:28:02 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:32.121 17:28:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:32.121 17:28:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:32.121 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.121 17:28:02 -- nvmf/common.sh@470 -- # nvmfpid=95236 00:24:32.121 17:28:02 -- nvmf/common.sh@471 -- # waitforlisten 95236 00:24:32.121 17:28:02 -- common/autotest_common.sh@817 -- # '[' -z 95236 ']' 00:24:32.121 17:28:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.121 17:28:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:32.121 17:28:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:32.121 17:28:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.121 17:28:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:32.121 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.121 [2024-04-25 17:28:02.074491] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:24:32.121 [2024-04-25 17:28:02.074576] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.380 [2024-04-25 17:28:02.213244] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:32.380 [2024-04-25 17:28:02.260886] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.380 [2024-04-25 17:28:02.260959] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.380 [2024-04-25 17:28:02.260984] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.380 [2024-04-25 17:28:02.260992] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.380 [2024-04-25 17:28:02.260998] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
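nvmftestinit with NET_TYPE=virt builds the whole test network out of veth pairs and a software bridge, with the target side isolated in the nvmf_tgt_ns_spdk namespace; the three pings above confirm that 10.0.0.1 (initiator side) and 10.0.0.2/10.0.0.3 (target side) are reachable before nvmf_tgt is started inside the namespace. Condensed from the nvmf_veth_init trace above, and meant only as a readable summary of commands that already appear there, the topology amounts to:

    ip netns add nvmf_tgt_ns_spdk

    # three veth pairs: one initiator-facing, two target-facing
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up, then bridge the host-side peers together
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP traffic in on the initiator interface and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT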
00:24:32.380 [2024-04-25 17:28:02.261839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.380 [2024-04-25 17:28:02.261850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.380 17:28:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:32.380 17:28:02 -- common/autotest_common.sh@850 -- # return 0 00:24:32.380 17:28:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:32.380 17:28:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:32.380 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.637 17:28:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.637 17:28:02 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.637 17:28:02 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:32.637 [2024-04-25 17:28:02.563399] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.637 17:28:02 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:32.907 Malloc0 00:24:33.177 17:28:02 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.177 17:28:03 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.434 17:28:03 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.692 [2024-04-25 17:28:03.541136] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.692 17:28:03 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:33.692 17:28:03 -- host/timeout.sh@32 -- # bdevperf_pid=95309 00:24:33.692 17:28:03 -- host/timeout.sh@34 -- # waitforlisten 95309 /var/tmp/bdevperf.sock 00:24:33.692 17:28:03 -- common/autotest_common.sh@817 -- # '[' -z 95309 ']' 00:24:33.692 17:28:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.692 17:28:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:33.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.692 17:28:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.692 17:28:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:33.692 17:28:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.692 [2024-04-25 17:28:03.598231] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
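With the network up, host/timeout.sh configures the target over JSON-RPC and then starts bdevperf as the initiator-side load generator; the controller attach traced just below uses a 5-second ctrlr-loss timeout and a 2-second reconnect delay, which is what the timeout test will exercise. Condensed from the invocations in this part of the log (paths, names, and parameters exactly as they appear here), the setup is roughly:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
    # one subsystem carrying that namespace and a listener on 10.0.0.2:4420
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf on core 2 (mask 0x4), queue depth 128, 4 KiB verify workload,
    # started idle (-z) and driven later via bdevperf.py perform_tests over its RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

    # attach the remote controller through the bdevperf RPC socket (as traced below)
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2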
00:24:33.692 [2024-04-25 17:28:03.598309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95309 ] 00:24:33.950 [2024-04-25 17:28:03.732428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.950 [2024-04-25 17:28:03.800928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.514 17:28:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:34.515 17:28:04 -- common/autotest_common.sh@850 -- # return 0 00:24:34.515 17:28:04 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:34.772 17:28:04 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:35.030 NVMe0n1 00:24:35.030 17:28:04 -- host/timeout.sh@51 -- # rpc_pid=95351 00:24:35.030 17:28:04 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.030 17:28:04 -- host/timeout.sh@53 -- # sleep 1 00:24:35.289 Running I/O for 10 seconds... 00:24:36.223 17:28:05 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.484 [2024-04-25 17:28:06.203405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203590] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203833] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.484 [2024-04-25 17:28:06.203868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.484 [2024-04-25 17:28:06.203878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.203889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.203899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.203911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.203921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.203932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.203941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.203953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.203962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.203974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.203984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.203995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 
[2024-04-25 17:28:06.204317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.485 [2024-04-25 17:28:06.204521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.485 [2024-04-25 17:28:06.204807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:73 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.485 [2024-04-25 17:28:06.204816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.204980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.204989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90608 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.486 [2024-04-25 17:28:06.205123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:36.486 [2024-04-25 17:28:06.205260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.486 [2024-04-25 17:28:06.205638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.486 [2024-04-25 17:28:06.205647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.205960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.205986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.205998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.487 [2024-04-25 17:28:06.206211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.206230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.206249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.206268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.206288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.206307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.206327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.487 [2024-04-25 17:28:06.206349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 
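The long block above is the driver draining qpair 1: every outstanding READ/WRITE is completed with the generic status ABORTED - SQ DELETION (00/08) once the submission queue is torn down, and nvme_qpair.c prints the original command (sqid/cid/nsid/lba) next to each aborted completion. To gauge how much I/O was in flight, one can simply count those completions in a saved copy of this console output (the file name below is only an assumption, not something produced by this job):

  grep -c 'ABORTED - SQ DELETION' nvmf-timeout-console.log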
[2024-04-25 17:28:06.206359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x107f3a0 is same with the state(5) to be set 00:24:36.487 [2024-04-25 17:28:06.206370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:36.487 [2024-04-25 17:28:06.206378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.487 [2024-04-25 17:28:06.206386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91008 len:8 PRP1 0x0 PRP2 0x0 00:24:36.487 [2024-04-25 17:28:06.206396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.487 [2024-04-25 17:28:06.206436] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x107f3a0 was disconnected and freed. reset controller. 00:24:36.487 [2024-04-25 17:28:06.206671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.487 [2024-04-25 17:28:06.206785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016630 (9): Bad file descriptor 00:24:36.487 [2024-04-25 17:28:06.206886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.487 [2024-04-25 17:28:06.206934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.487 [2024-04-25 17:28:06.206957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1016630 with addr=10.0.0.2, port=4420 00:24:36.487 [2024-04-25 17:28:06.206968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1016630 is same with the state(5) to be set 00:24:36.487 [2024-04-25 17:28:06.206987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016630 (9): Bad file descriptor 00:24:36.487 [2024-04-25 17:28:06.207003] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:36.487 [2024-04-25 17:28:06.207013] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:36.487 [2024-04-25 17:28:06.207024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.487 [2024-04-25 17:28:06.207059] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
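After the qpair is freed, bdev_nvme starts resetting the controller, but every reconnect to 10.0.0.2:4420 fails with errno 111 (connection refused) because the listener is gone; the attempts in the records that follow repeat roughly every two seconds, matching the "sleep 2" trace from host/timeout.sh@56. A minimal sketch of the kind of poll the trace suggests runs between those attempts, reusing the rpc.py calls shown in the trace (socket path as logged):

  sleep 2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
  # While reconnects are still being retried these should print "NVMe0" and "NVMe0n1".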
00:24:36.487 [2024-04-25 17:28:06.207069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.487 17:28:06 -- host/timeout.sh@56 -- # sleep 2 00:24:38.387 [2024-04-25 17:28:08.207168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.387 [2024-04-25 17:28:08.207256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.387 [2024-04-25 17:28:08.207275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1016630 with addr=10.0.0.2, port=4420 00:24:38.387 [2024-04-25 17:28:08.207287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1016630 is same with the state(5) to be set 00:24:38.387 [2024-04-25 17:28:08.207309] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016630 (9): Bad file descriptor 00:24:38.387 [2024-04-25 17:28:08.207337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.387 [2024-04-25 17:28:08.207347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.387 [2024-04-25 17:28:08.207357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.387 [2024-04-25 17:28:08.207380] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.387 [2024-04-25 17:28:08.207391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.387 17:28:08 -- host/timeout.sh@57 -- # get_controller 00:24:38.387 17:28:08 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.387 17:28:08 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:38.646 17:28:08 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:38.646 17:28:08 -- host/timeout.sh@58 -- # get_bdev 00:24:38.646 17:28:08 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:38.646 17:28:08 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:38.904 17:28:08 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:38.904 17:28:08 -- host/timeout.sh@61 -- # sleep 5 00:24:40.280 [2024-04-25 17:28:10.207484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.280 [2024-04-25 17:28:10.207576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.280 [2024-04-25 17:28:10.207593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1016630 with addr=10.0.0.2, port=4420 00:24:40.280 [2024-04-25 17:28:10.207606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1016630 is same with the state(5) to be set 00:24:40.280 [2024-04-25 17:28:10.207629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1016630 (9): Bad file descriptor 00:24:40.280 [2024-04-25 17:28:10.207646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.280 [2024-04-25 17:28:10.207654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.280 [2024-04-25 17:28:10.207664] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:24:40.280 [2024-04-25 17:28:10.207688] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.280 [2024-04-25 17:28:10.207698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.811 [2024-04-25 17:28:12.207771] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.377 00:24:43.377 Latency(us) 00:24:43.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.377 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:43.377 Verification LBA range: start 0x0 length 0x4000 00:24:43.377 NVMe0n1 : 8.14 1389.50 5.43 15.73 0.00 90969.86 1809.69 7015926.69 00:24:43.377 =================================================================================================================== 00:24:43.377 Total : 1389.50 5.43 15.73 0.00 90969.86 1809.69 7015926.69 00:24:43.377 0 00:24:43.943 17:28:13 -- host/timeout.sh@62 -- # get_controller 00:24:43.943 17:28:13 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.943 17:28:13 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:44.202 17:28:14 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:44.202 17:28:14 -- host/timeout.sh@63 -- # get_bdev 00:24:44.202 17:28:14 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:44.202 17:28:14 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:44.460 17:28:14 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:44.460 17:28:14 -- host/timeout.sh@65 -- # wait 95351 00:24:44.460 17:28:14 -- host/timeout.sh@67 -- # killprocess 95309 00:24:44.460 17:28:14 -- common/autotest_common.sh@936 -- # '[' -z 95309 ']' 00:24:44.460 17:28:14 -- common/autotest_common.sh@940 -- # kill -0 95309 00:24:44.460 17:28:14 -- common/autotest_common.sh@941 -- # uname 00:24:44.460 17:28:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:44.460 17:28:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95309 00:24:44.460 17:28:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:44.461 17:28:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:44.461 killing process with pid 95309 00:24:44.461 17:28:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95309' 00:24:44.461 Received shutdown signal, test time was about 9.187494 seconds 00:24:44.461 00:24:44.461 Latency(us) 00:24:44.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.461 =================================================================================================================== 00:24:44.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.461 17:28:14 -- common/autotest_common.sh@955 -- # kill 95309 00:24:44.461 17:28:14 -- common/autotest_common.sh@960 -- # wait 95309 00:24:44.461 17:28:14 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.719 [2024-04-25 17:28:14.664857] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.719 17:28:14 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:44.719 17:28:14 -- host/timeout.sh@74 -- # bdevperf_pid=95509 
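Before the second bdevperf instance (pid 95509) starts up, the latency summary from the first run above is worth a sanity check: the MiB/s column appears to be derived directly from the IOPS column and the 4096-byte I/O size used by the verify job, and the numbers agree:

  awk 'BEGIN { printf "%.2f MiB/s\n", 1389.50 * 4096 / 1048576 }'   # prints 5.43 MiB/s, matching the table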
00:24:44.719 17:28:14 -- host/timeout.sh@76 -- # waitforlisten 95509 /var/tmp/bdevperf.sock 00:24:44.719 17:28:14 -- common/autotest_common.sh@817 -- # '[' -z 95509 ']' 00:24:44.719 17:28:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.719 17:28:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:44.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.719 17:28:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.719 17:28:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:44.719 17:28:14 -- common/autotest_common.sh@10 -- # set +x 00:24:44.978 [2024-04-25 17:28:14.723829] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:24:44.978 [2024-04-25 17:28:14.723911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95509 ] 00:24:44.978 [2024-04-25 17:28:14.853021] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.978 [2024-04-25 17:28:14.906846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.913 17:28:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.913 17:28:15 -- common/autotest_common.sh@850 -- # return 0 00:24:45.913 17:28:15 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:45.913 17:28:15 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:46.171 NVMe0n1 00:24:46.171 17:28:16 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:46.171 17:28:16 -- host/timeout.sh@84 -- # rpc_pid=95551 00:24:46.171 17:28:16 -- host/timeout.sh@86 -- # sleep 1 00:24:46.429 Running I/O for 10 seconds... 
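This second run attaches NVMe0 with explicit failover knobs (--ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2, --reconnect-delay-sec 1) and then, as the next trace line shows, removes the target listener mid-I/O to exercise them. A condensed sketch of that sequence, using only the rpc.py commands that appear in the trace (paths, address, and NQN as logged):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # With I/O running, dropping the listener on the target side triggers the timeout path under test:
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420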
00:24:47.369 17:28:17 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.369 [2024-04-25 17:28:17.316271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316428] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316461] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316500] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316558] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.369 [2024-04-25 17:28:17.316575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316680] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316706] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316745] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316753] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316783] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316816] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316863] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316879] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the 
state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.316992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317008] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317016] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317102] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317245] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317290] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 
17:28:17.317297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317336] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.370 [2024-04-25 17:28:17.317343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317359] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317382] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317433] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110acf0 is same with the state(5) to be set 00:24:47.371 [2024-04-25 17:28:17.317905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89568 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.317948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.317973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.317984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.317996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:47.371 [2024-04-25 17:28:17.318197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318389] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.371 [2024-04-25 17:28:17.318620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.371 [2024-04-25 17:28:17.318631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.318982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.318993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:47.372 [2024-04-25 17:28:17.319287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.372 [2024-04-25 17:28:17.319484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.372 [2024-04-25 17:28:17.319504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.372 [2024-04-25 17:28:17.319514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319671] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.319823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90176 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.319988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.319997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:47.373 [2024-04-25 17:28:17.320159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.373 [2024-04-25 17:28:17.320198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.373 [2024-04-25 17:28:17.320403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.373 [2024-04-25 17:28:17.320414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320608] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.374 [2024-04-25 17:28:17.320763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.374 [2024-04-25 17:28:17.320816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.374 [2024-04-25 17:28:17.320825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90584 len:8 PRP1 0x0 PRP2 0x0 00:24:47.374 [2024-04-25 17:28:17.320835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.374 [2024-04-25 17:28:17.320878] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac73a0 was disconnected and freed. reset controller. 
00:24:47.374 [2024-04-25 17:28:17.321133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.374 [2024-04-25 17:28:17.321231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:24:47.374 [2024-04-25 17:28:17.321326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.374 [2024-04-25 17:28:17.321373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.374 [2024-04-25 17:28:17.321389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e630 with addr=10.0.0.2, port=4420 00:24:47.374 [2024-04-25 17:28:17.321399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e630 is same with the state(5) to be set 00:24:47.374 [2024-04-25 17:28:17.321416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:24:47.374 [2024-04-25 17:28:17.321431] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.374 [2024-04-25 17:28:17.321440] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.374 [2024-04-25 17:28:17.321449] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.374 [2024-04-25 17:28:17.321469] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.374 [2024-04-25 17:28:17.321478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.632 17:28:17 -- host/timeout.sh@90 -- # sleep 1 00:24:48.567 [2024-04-25 17:28:18.321559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.568 [2024-04-25 17:28:18.321644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.568 [2024-04-25 17:28:18.321661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e630 with addr=10.0.0.2, port=4420 00:24:48.568 [2024-04-25 17:28:18.321672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e630 is same with the state(5) to be set 00:24:48.568 [2024-04-25 17:28:18.321691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:24:48.568 [2024-04-25 17:28:18.321720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:48.568 [2024-04-25 17:28:18.321730] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:48.568 [2024-04-25 17:28:18.321739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:48.568 [2024-04-25 17:28:18.321776] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:48.568 [2024-04-25 17:28:18.321787] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:48.568 17:28:18 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.826 [2024-04-25 17:28:18.591431] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.826 17:28:18 -- host/timeout.sh@92 -- # wait 95551 00:24:49.393 [2024-04-25 17:28:19.338847] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:57.547
00:24:57.547                                                                              Latency(us)
00:24:57.547 Device Information      : runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average       min          max
00:24:57.547 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:57.547 Verification LBA range: start 0x0 length 0x4000
00:24:57.547 NVMe0n1                 :      10.01  7248.05    28.31     0.00   0.00    17622.94   1779.90   3019898.88
00:24:57.547 ===================================================================================================================
00:24:57.547 Total                   :             7248.05    28.31     0.00   0.00    17622.94   1779.90   3019898.88
00:24:57.547 0 00:24:57.547 17:28:26 -- host/timeout.sh@97 -- # rpc_pid=95668 00:24:57.547 17:28:26 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:57.547 17:28:26 -- host/timeout.sh@98 -- # sleep 1 00:24:57.547 Running I/O for 10 seconds... 00:24:57.547 17:28:27 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.547 [2024-04-25 17:28:27.462495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.463960]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.464028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.464104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.547 [2024-04-25 17:28:27.464176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.464844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf63b30 is same with the state(5) to be set 00:24:57.548 [2024-04-25 17:28:27.465170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.548 [2024-04-25 17:28:27.465791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.548 [2024-04-25 17:28:27.465815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.548 [2024-04-25 17:28:27.465836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.548 [2024-04-25 17:28:27.465856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.548 [2024-04-25 17:28:27.465876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.548 [2024-04-25 17:28:27.465897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.548 [2024-04-25 17:28:27.465908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.465918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.465929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.465939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 
17:28:27.465950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.465959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.465980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.465989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.549 [2024-04-25 17:28:27.466533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.549 [2024-04-25 17:28:27.466543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89800 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.466700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:57.550 [2024-04-25 17:28:27.466767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.466985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.466994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.467014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.550 [2024-04-25 17:28:27.467035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.467055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.467076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.467125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.467158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.467177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.467195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.550 [2024-04-25 17:28:27.467205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.550 [2024-04-25 17:28:27.467213] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 
[2024-04-25 17:28:27.467603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.551 [2024-04-25 17:28:27.467667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.551 [2024-04-25 17:28:27.467852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.551 [2024-04-25 17:28:27.467862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5100 is same with the state(5) to be set 00:24:57.551 [2024-04-25 17:28:27.467875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.551 [2024-04-25 17:28:27.467883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.552 [2024-04-25 17:28:27.467891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90048 len:8 PRP1 0x0 PRP2 0x0 00:24:57.552 [2024-04-25 17:28:27.467901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.552 [2024-04-25 17:28:27.467955] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac5100 was disconnected and freed. reset controller. 00:24:57.552 [2024-04-25 17:28:27.468257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 [2024-04-25 17:28:27.468384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:24:57.552 [2024-04-25 17:28:27.468525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-04-25 17:28:27.468578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.552 [2024-04-25 17:28:27.468595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e630 with addr=10.0.0.2, port=4420 00:24:57.552 [2024-04-25 17:28:27.468633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e630 is same with the state(5) to be set 00:24:57.552 [2024-04-25 17:28:27.468653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:24:57.552 [2024-04-25 17:28:27.468680] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:57.552 [2024-04-25 17:28:27.468695] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:57.552 [2024-04-25 17:28:27.468745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.552 [2024-04-25 17:28:27.468769] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
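At this point in the run the target listener on 10.0.0.2:4420 has been removed, so the queued WRITE/READ commands above are aborted with SQ DELETION, qpair 0x1ac5100 is disconnected and freed, and each controller reset attempt fails at connect() with errno 111 (ECONNREFUSED) until the listener is restored. One way to watch the host side while this loop runs is to poll the bdevperf application over its RPC socket; this is an illustrative check only, not part of timeout.sh, and it assumes the controller was attached under the name NVMe0 as it is later in this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # List the controller registered in the bdevperf app while reconnects fail;
  # it stays registered on the host side even though its qpair is gone.
  while true; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0
      sleep 1
  done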
00:24:57.552 [2024-04-25 17:28:27.468780] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.552 17:28:27 -- host/timeout.sh@101 -- # sleep 3 00:24:58.927 [2024-04-25 17:28:28.468889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.927 [2024-04-25 17:28:28.468999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.927 [2024-04-25 17:28:28.469017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e630 with addr=10.0.0.2, port=4420 00:24:58.927 [2024-04-25 17:28:28.469029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e630 is same with the state(5) to be set 00:24:58.927 [2024-04-25 17:28:28.469052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:24:58.927 [2024-04-25 17:28:28.469099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:58.927 [2024-04-25 17:28:28.469108] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:58.927 [2024-04-25 17:28:28.469117] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:58.927 [2024-04-25 17:28:28.469140] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.927 [2024-04-25 17:28:28.469150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.494 [2024-04-25 17:28:29.469227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.494 [2024-04-25 17:28:29.469316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.494 [2024-04-25 17:28:29.469333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e630 with addr=10.0.0.2, port=4420 00:24:59.494 [2024-04-25 17:28:29.469345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e630 is same with the state(5) to be set 00:24:59.494 [2024-04-25 17:28:29.469365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:24:59.494 [2024-04-25 17:28:29.469398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.494 [2024-04-25 17:28:29.469423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.494 [2024-04-25 17:28:29.469434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.494 [2024-04-25 17:28:29.469457] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
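The failed attempts above land roughly one second apart (17:28:27 through 17:28:30, with two connect() errors logged per attempt), while the test script sleeps for three seconds before bringing the listener back. The cadence can be pulled straight out of a saved copy of this console log; the file name below is only a placeholder:

  # Count failed connect() attempts and group them by second.
  grep 'connect() failed, errno = 111' nvmf-tcp-vg-autotest.log \
      | grep -o '17:28:[0-9][0-9]' | sort | uniq -c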
00:24:59.494 [2024-04-25 17:28:29.469468] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.868 [2024-04-25 17:28:30.472136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.868 [2024-04-25 17:28:30.472248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.868 [2024-04-25 17:28:30.472266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e630 with addr=10.0.0.2, port=4420 00:25:00.868 [2024-04-25 17:28:30.472301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e630 is same with the state(5) to be set 00:25:00.868 [2024-04-25 17:28:30.472588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e630 (9): Bad file descriptor 00:25:00.868 [2024-04-25 17:28:30.472865] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.868 [2024-04-25 17:28:30.472888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.868 [2024-04-25 17:28:30.472900] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.868 [2024-04-25 17:28:30.476447] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.869 [2024-04-25 17:28:30.476477] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 17:28:30 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.869 [2024-04-25 17:28:30.679556] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.869 17:28:30 -- host/timeout.sh@103 -- # wait 95668 00:25:01.803 [2024-04-25 17:28:31.513972] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
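Once nvmf_subsystem_add_listener brings 10.0.0.2:4420 back at 17:28:30, the next reset attempt connects and the controller recovers ("Resetting controller successful"), which is consistent with the roughly 3.0 s maximum latency reported in the bdevperf tables. The outage itself is nothing more than a listener toggle; condensed from the trace, using only commands that already appear in it:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener, hold the outage for three seconds, then restore it.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420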
00:25:07.071 00:25:07.071 Latency(us) 00:25:07.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.071 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:07.071 Verification LBA range: start 0x0 length 0x4000 00:25:07.071 NVMe0n1 : 10.01 6051.49 23.64 4190.87 0.00 12472.48 614.40 3019898.88 00:25:07.071 =================================================================================================================== 00:25:07.071 Total : 6051.49 23.64 4190.87 0.00 12472.48 0.00 3019898.88 00:25:07.071 0 00:25:07.071 17:28:36 -- host/timeout.sh@105 -- # killprocess 95509 00:25:07.071 17:28:36 -- common/autotest_common.sh@936 -- # '[' -z 95509 ']' 00:25:07.071 17:28:36 -- common/autotest_common.sh@940 -- # kill -0 95509 00:25:07.071 17:28:36 -- common/autotest_common.sh@941 -- # uname 00:25:07.071 17:28:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:07.071 17:28:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95509 00:25:07.071 killing process with pid 95509 00:25:07.071 Received shutdown signal, test time was about 10.000000 seconds 00:25:07.071 00:25:07.071 Latency(us) 00:25:07.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.071 =================================================================================================================== 00:25:07.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.071 17:28:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:07.071 17:28:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:07.071 17:28:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95509' 00:25:07.071 17:28:36 -- common/autotest_common.sh@955 -- # kill 95509 00:25:07.071 17:28:36 -- common/autotest_common.sh@960 -- # wait 95509 00:25:07.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.071 17:28:36 -- host/timeout.sh@110 -- # bdevperf_pid=95789 00:25:07.071 17:28:36 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:07.071 17:28:36 -- host/timeout.sh@112 -- # waitforlisten 95789 /var/tmp/bdevperf.sock 00:25:07.071 17:28:36 -- common/autotest_common.sh@817 -- # '[' -z 95789 ']' 00:25:07.071 17:28:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.071 17:28:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:07.071 17:28:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.071 17:28:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:07.071 17:28:36 -- common/autotest_common.sh@10 -- # set +x 00:25:07.071 [2024-04-25 17:28:36.616201] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:25:07.071 [2024-04-25 17:28:36.616501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95789 ] 00:25:07.071 [2024-04-25 17:28:36.751766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.071 [2024-04-25 17:28:36.802475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.071 17:28:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:07.071 17:28:36 -- common/autotest_common.sh@850 -- # return 0 00:25:07.071 17:28:36 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95789 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:07.071 17:28:36 -- host/timeout.sh@116 -- # dtrace_pid=95804 00:25:07.071 17:28:36 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:07.330 17:28:37 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:07.589 NVMe0n1 00:25:07.589 17:28:37 -- host/timeout.sh@124 -- # rpc_pid=95857 00:25:07.589 17:28:37 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:07.589 17:28:37 -- host/timeout.sh@125 -- # sleep 1 00:25:07.589 Running I/O for 10 seconds... 00:25:08.524 17:28:38 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.785 [2024-04-25 17:28:38.696422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [2024-04-25 17:28:38.696550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.785 [... this tcp.c:1587:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0xf672d0 repeats verbatim throughout this window, with only the timestamp advancing from 17:28:38.696559 through 17:28:38.697538 ...] 00:25:08.786 [2024-04-25 17:28:38.697546]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.786 [2024-04-25 17:28:38.697553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.787 [2024-04-25 17:28:38.697561] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.787 [2024-04-25 17:28:38.697569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.787 [2024-04-25 17:28:38.697577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.787 [2024-04-25 17:28:38.697585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.787 [2024-04-25 17:28:38.697593] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf672d0 is same with the state(5) to be set 00:25:08.787 [2024-04-25 17:28:38.698069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
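The burst of tcp.c:1587 "recv state ... same with the state(5) to be set" errors and the ABORTED - SQ DELETION completions above follow from the step this test just performed: bdevperf had attached NVMe0 over TCP with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, a 10-second I/O run was started, and the target's listener was then removed mid-run, so the reads still queued on that qpair are completed as aborted while the connection is torn down. For readability, the command sequence already shown earlier in this log is restated here in one place; every path, address and value is copied from the log above, and only the comments are added:

# host-side NVMe bdev options for the bdevperf app (as invoked above)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
# attach the remote subsystem with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# start the 10-second I/O run; the test backgrounds this and records its pid (rpc_pid=95857 above)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
# remove the target listener while I/O is running to exercise the timeout/reconnect path
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420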
00:25:08.787 [2024-04-25 17:28:38.698316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 
17:28:38.698519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.787 [2024-04-25 17:28:38.698836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.787 [2024-04-25 17:28:38.698848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.698858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.698870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.698880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.698891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.698901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.698912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.698922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.698933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.698942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.698953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.698963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.698974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.698983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.698995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.788 [2024-04-25 17:28:38.699530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.788 [2024-04-25 17:28:38.699542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 
17:28:38.699612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.699989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.699999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.789 [2024-04-25 17:28:38.700179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.789 [2024-04-25 17:28:38.700216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5552 len:8 PRP1 0x0 PRP2 0x0 00:25:08.789 [2024-04-25 17:28:38.700225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.789 [2024-04-25 17:28:38.700248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.789 [2024-04-25 17:28:38.700256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48952 len:8 PRP1 0x0 PRP2 0x0 00:25:08.789 [2024-04-25 17:28:38.700264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.789 [2024-04-25 17:28:38.700310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.789 [2024-04-25 17:28:38.700318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35440 len:8 PRP1 0x0 PRP2 0x0 00:25:08.789 [2024-04-25 17:28:38.700327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:08.789 [2024-04-25 17:28:38.700351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.789 [2024-04-25 17:28:38.700359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92560 len:8 PRP1 0x0 PRP2 0x0 00:25:08.789 [2024-04-25 17:28:38.700369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.789 [2024-04-25 17:28:38.700378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73824 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121848 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11272 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4712 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62952 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700558] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81376 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94904 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47360 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54280 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119944 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53472 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17928 len:8 PRP1 0x0 PRP2 0x0 00:25:08.790 [2024-04-25 17:28:38.700853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.790 [2024-04-25 17:28:38.700865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.790 [2024-04-25 17:28:38.700872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.790 [2024-04-25 17:28:38.700881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.700890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.700899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.700906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.700914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44712 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.700923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.700932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.700939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.700947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82584 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.700956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.700965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.700972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.700980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13488 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.700989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.700999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.701006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 
17:28:38.701014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87616 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.701039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.701048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.701055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.701063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55672 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.701071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.701080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.701087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52952 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38448 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10264 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5224 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715540] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109424 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130664 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89168 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51184 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3720 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109048 len:8 PRP1 0x0 
PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:08.791 [2024-04-25 17:28:38.715825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:08.791 [2024-04-25 17:28:38.715833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61984 len:8 PRP1 0x0 PRP2 0x0 00:25:08.791 [2024-04-25 17:28:38.715842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.715888] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd013a0 was disconnected and freed. reset controller. 00:25:08.791 [2024-04-25 17:28:38.716003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.791 [2024-04-25 17:28:38.716032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.716045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.791 [2024-04-25 17:28:38.716055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.791 [2024-04-25 17:28:38.716065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.791 [2024-04-25 17:28:38.716074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.792 [2024-04-25 17:28:38.716084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.792 [2024-04-25 17:28:38.716108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.792 [2024-04-25 17:28:38.716118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc98630 is same with the state(5) to be set 00:25:08.792 [2024-04-25 17:28:38.716391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.792 [2024-04-25 17:28:38.716422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc98630 (9): Bad file descriptor 00:25:08.792 [2024-04-25 17:28:38.716527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.792 [2024-04-25 17:28:38.716579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.792 [2024-04-25 17:28:38.716597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc98630 with addr=10.0.0.2, port=4420 00:25:08.792 [2024-04-25 17:28:38.716608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc98630 is same with the state(5) to be set 00:25:08.792 [2024-04-25 17:28:38.716626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc98630 (9): Bad file descriptor 00:25:08.792 [2024-04-25 17:28:38.716642] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:08.792 [2024-04-25 17:28:38.716652] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:08.792 [2024-04-25 17:28:38.716662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:08.792 [2024-04-25 17:28:38.716681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.792 [2024-04-25 17:28:38.716692] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.792 17:28:38 -- host/timeout.sh@128 -- # wait 95857 00:25:11.320 [2024-04-25 17:28:40.716921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.320 [2024-04-25 17:28:40.717017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.320 [2024-04-25 17:28:40.717036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc98630 with addr=10.0.0.2, port=4420 00:25:11.320 [2024-04-25 17:28:40.717048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc98630 is same with the state(5) to be set 00:25:11.320 [2024-04-25 17:28:40.717086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc98630 (9): Bad file descriptor 00:25:11.320 [2024-04-25 17:28:40.717103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:11.320 [2024-04-25 17:28:40.717111] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:11.320 [2024-04-25 17:28:40.717121] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.320 [2024-04-25 17:28:40.717144] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:11.320 [2024-04-25 17:28:40.717154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.216 [2024-04-25 17:28:42.717274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.216 [2024-04-25 17:28:42.717378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.216 [2024-04-25 17:28:42.717396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc98630 with addr=10.0.0.2, port=4420 00:25:13.216 [2024-04-25 17:28:42.717408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc98630 is same with the state(5) to be set 00:25:13.216 [2024-04-25 17:28:42.717436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc98630 (9): Bad file descriptor 00:25:13.216 [2024-04-25 17:28:42.717462] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.217 [2024-04-25 17:28:42.717473] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.217 [2024-04-25 17:28:42.717482] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.217 [2024-04-25 17:28:42.717505] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
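Editor's note: the repeated "connect() failed, errno = 111" (ECONNREFUSED) entries above show the bdev_nvme layer retrying the TCP connection to 10.0.0.2:4420 roughly every two seconds after the listener went away; that cadence comes from the controller's reconnect-delay setting. Below is a minimal sketch of attaching a controller with an explicit reconnect policy over JSON-RPC; the --reconnect-delay-sec and --ctrlr-loss-timeout-sec flag names are assumptions based on recent SPDK releases, so verify them with "scripts/rpc.py bdev_nvme_attach_controller -h" before relying on them.

  # sketch only (assumed flag names; verify with: scripts/rpc.py bdev_nvme_attach_controller -h)
  # --reconnect-delay-sec    seconds to wait between reconnect attempts (the ~2 s cadence above)
  # --ctrlr-loss-timeout-sec seconds before the controller is given up on (-1 retries forever)
  scripts/rpc.py bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 60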
00:25:13.217 [2024-04-25 17:28:42.717515] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.114 [2024-04-25 17:28:44.717577] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.050 00:25:16.050 Latency(us) 00:25:16.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.050 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:16.050 NVMe0n1 : 8.16 2778.35 10.85 15.68 0.00 45861.30 2025.66 7046430.72 00:25:16.050 =================================================================================================================== 00:25:16.050 Total : 2778.35 10.85 15.68 0.00 45861.30 2025.66 7046430.72 00:25:16.050 0 00:25:16.050 17:28:45 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:16.050 Attaching 5 probes... 00:25:16.050 1342.099884: reset bdev controller NVMe0 00:25:16.050 1342.176128: reconnect bdev controller NVMe0 00:25:16.050 3342.532903: reconnect delay bdev controller NVMe0 00:25:16.050 3342.551822: reconnect bdev controller NVMe0 00:25:16.050 5342.891846: reconnect delay bdev controller NVMe0 00:25:16.050 5342.907229: reconnect bdev controller NVMe0 00:25:16.050 7343.250598: reconnect delay bdev controller NVMe0 00:25:16.050 7343.266001: reconnect bdev controller NVMe0 00:25:16.050 17:28:45 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:16.050 17:28:45 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:16.050 17:28:45 -- host/timeout.sh@136 -- # kill 95804 00:25:16.050 17:28:45 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:16.050 17:28:45 -- host/timeout.sh@139 -- # killprocess 95789 00:25:16.050 17:28:45 -- common/autotest_common.sh@936 -- # '[' -z 95789 ']' 00:25:16.050 17:28:45 -- common/autotest_common.sh@940 -- # kill -0 95789 00:25:16.050 17:28:45 -- common/autotest_common.sh@941 -- # uname 00:25:16.050 17:28:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.050 17:28:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95789 00:25:16.050 killing process with pid 95789 00:25:16.050 Received shutdown signal, test time was about 8.219475 seconds 00:25:16.050 00:25:16.050 Latency(us) 00:25:16.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.050 =================================================================================================================== 00:25:16.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.050 17:28:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:16.050 17:28:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:16.050 17:28:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95789' 00:25:16.050 17:28:45 -- common/autotest_common.sh@955 -- # kill 95789 00:25:16.050 17:28:45 -- common/autotest_common.sh@960 -- # wait 95789 00:25:16.050 17:28:45 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.308 17:28:46 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:16.308 17:28:46 -- host/timeout.sh@145 -- # nvmftestfini 00:25:16.308 17:28:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:16.308 17:28:46 -- nvmf/common.sh@117 -- # sync 00:25:16.308 17:28:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.308 17:28:46 -- nvmf/common.sh@120 -- # set +e 
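Editor's note: the pass/fail decision for the reconnect test above comes straight from the trace file: the script counts how many "reconnect delay bdev controller NVMe0" records landed in trace.txt and requires more than two of them, which confirms the delay was actually applied between attempts. A standalone sketch of the same check, with the path, search string, and threshold taken from the log output above:

  # sketch of the verification performed above
  trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
  if (( delays <= 2 )); then
      echo "expected more than 2 delayed reconnects, got $delays" >&2
      exit 1
  fi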
00:25:16.308 17:28:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.308 17:28:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.308 rmmod nvme_tcp 00:25:16.308 rmmod nvme_fabrics 00:25:16.567 rmmod nvme_keyring 00:25:16.567 17:28:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.567 17:28:46 -- nvmf/common.sh@124 -- # set -e 00:25:16.567 17:28:46 -- nvmf/common.sh@125 -- # return 0 00:25:16.567 17:28:46 -- nvmf/common.sh@478 -- # '[' -n 95236 ']' 00:25:16.567 17:28:46 -- nvmf/common.sh@479 -- # killprocess 95236 00:25:16.567 17:28:46 -- common/autotest_common.sh@936 -- # '[' -z 95236 ']' 00:25:16.567 17:28:46 -- common/autotest_common.sh@940 -- # kill -0 95236 00:25:16.567 17:28:46 -- common/autotest_common.sh@941 -- # uname 00:25:16.567 17:28:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.567 17:28:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95236 00:25:16.567 killing process with pid 95236 00:25:16.567 17:28:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:16.567 17:28:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:16.567 17:28:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95236' 00:25:16.567 17:28:46 -- common/autotest_common.sh@955 -- # kill 95236 00:25:16.567 17:28:46 -- common/autotest_common.sh@960 -- # wait 95236 00:25:16.567 17:28:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:16.567 17:28:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:16.567 17:28:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:16.567 17:28:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.567 17:28:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:16.567 17:28:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.567 17:28:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.567 17:28:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.826 17:28:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:16.826 00:25:16.826 real 0m45.037s 00:25:16.826 user 2m12.871s 00:25:16.826 sys 0m4.465s 00:25:16.826 17:28:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:16.826 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.826 ************************************ 00:25:16.826 END TEST nvmf_timeout 00:25:16.826 ************************************ 00:25:16.826 17:28:46 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:25:16.826 17:28:46 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:25:16.826 17:28:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:16.826 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.826 17:28:46 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:25:16.826 00:25:16.826 real 18m36.695s 00:25:16.826 user 57m32.665s 00:25:16.826 sys 3m55.305s 00:25:16.826 17:28:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:16.826 ************************************ 00:25:16.826 END TEST nvmf_tcp 00:25:16.826 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.826 ************************************ 00:25:16.826 17:28:46 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:25:16.826 17:28:46 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:16.826 17:28:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:16.826 17:28:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 
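Editor's note: nvmftestfini, traced above, unloads the kernel NVMe/TCP initiator modules, kills the target application, and tears down the veth/namespace plumbing before the next test starts. A condensed sketch of the same teardown steps (module names, interface name, and flush command taken from the trace; $nvmfpid is a placeholder for the target pid, 95236 in this run):

  # sketch of the teardown sequence traced above
  modprobe -r nvme-tcp
  modprobe -r nvme-fabrics
  kill "$nvmfpid" 2>/dev/null || true
  ip netns del nvmf_tgt_ns_spdk 2>/dev/null || true
  ip -4 addr flush nvmf_init_if 2>/dev/null || true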
00:25:16.826 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.826 ************************************ 00:25:16.826 START TEST spdkcli_nvmf_tcp 00:25:16.826 ************************************ 00:25:16.826 17:28:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:17.085 * Looking for test storage... 00:25:17.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:17.085 17:28:46 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:17.085 17:28:46 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:17.085 17:28:46 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:17.085 17:28:46 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:17.085 17:28:46 -- nvmf/common.sh@7 -- # uname -s 00:25:17.085 17:28:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.085 17:28:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.085 17:28:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.085 17:28:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.085 17:28:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.085 17:28:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.085 17:28:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.085 17:28:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.085 17:28:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.085 17:28:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.085 17:28:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:25:17.085 17:28:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:25:17.085 17:28:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.085 17:28:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.085 17:28:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:17.085 17:28:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.085 17:28:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:17.085 17:28:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.085 17:28:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.085 17:28:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.085 17:28:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.085 17:28:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.085 17:28:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.085 17:28:46 -- paths/export.sh@5 -- # export PATH 00:25:17.085 17:28:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.085 17:28:46 -- nvmf/common.sh@47 -- # : 0 00:25:17.085 17:28:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.085 17:28:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.085 17:28:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.085 17:28:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.085 17:28:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.085 17:28:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.085 17:28:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.085 17:28:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.085 17:28:46 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:17.085 17:28:46 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:17.085 17:28:46 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:17.085 17:28:46 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:17.085 17:28:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:17.085 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 17:28:46 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:17.085 17:28:46 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96082 00:25:17.085 17:28:46 -- spdkcli/common.sh@34 -- # waitforlisten 96082 00:25:17.085 17:28:46 -- common/autotest_common.sh@817 -- # '[' -z 96082 ']' 00:25:17.085 17:28:46 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:17.085 17:28:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.085 17:28:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:17.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.085 17:28:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.085 17:28:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:17.085 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:25:17.085 [2024-04-25 17:28:46.916815] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
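Editor's note: before any spdkcli commands run, waitforlisten blocks until the freshly started nvmf_tgt (pid 96082, core mask 0x3) answers on its RPC Unix socket. A rough equivalent of that wait loop is sketched below, using rpc_get_methods as a cheap liveness probe against the default socket path shown in the log; this is an illustration, not the actual autotest helper.

  # sketch: start the target and wait until its RPC socket responds
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$tgt_pid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.2
  done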
00:25:17.085 [2024-04-25 17:28:46.916901] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96082 ] 00:25:17.085 [2024-04-25 17:28:47.043669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:17.344 [2024-04-25 17:28:47.096432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.344 [2024-04-25 17:28:47.096443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.344 17:28:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:17.344 17:28:47 -- common/autotest_common.sh@850 -- # return 0 00:25:17.344 17:28:47 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:17.344 17:28:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:17.344 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:25:17.344 17:28:47 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:17.344 17:28:47 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:17.344 17:28:47 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:17.344 17:28:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:17.344 17:28:47 -- common/autotest_common.sh@10 -- # set +x 00:25:17.344 17:28:47 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:17.344 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:17.344 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:17.344 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:17.344 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:17.344 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:17.344 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:17.344 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:17.344 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:17.344 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:17.344 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:17.344 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:17.344 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:17.344 ' 00:25:17.911 [2024-04-25 17:28:47.659884] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:20.463 [2024-04-25 17:28:49.852166] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.410 [2024-04-25 17:28:51.121271] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:23.950 [2024-04-25 17:28:53.426685] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:25.852 [2024-04-25 17:28:55.431937] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:27.227 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:27.227 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:27.227 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:27.227 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:27.227 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:27.227 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:27.227 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:27.227 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:27.227 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:27.227 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:27.227 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:27.227 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:27.227 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:27.227 17:28:57 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:27.227 17:28:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:27.227 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.227 17:28:57 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:27.227 17:28:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:27.227 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.227 17:28:57 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:27.227 17:28:57 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:27.794 17:28:57 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:27.794 17:28:57 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:27.794 17:28:57 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:27.794 17:28:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:27.794 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.794 17:28:57 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:27.794 17:28:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:27.794 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.794 17:28:57 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:27.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:27.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:27.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:27.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:27.794 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:27.794 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:27.794 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:27.794 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:27.794 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:27.794 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:27.794 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:27.794 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:27.794 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:27.794 ' 00:25:33.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:33.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:33.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:33.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:33.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:33.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:33.060 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:33.060 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:33.060 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:33.060 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:33.060 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:33.060 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:33.060 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:33.060 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:33.060 17:29:02 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:33.060 17:29:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:33.060 17:29:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.060 17:29:02 -- spdkcli/nvmf.sh@90 -- # killprocess 96082 00:25:33.060 17:29:02 -- common/autotest_common.sh@936 -- # '[' -z 96082 ']' 00:25:33.060 17:29:02 -- common/autotest_common.sh@940 -- # kill -0 96082 00:25:33.060 17:29:02 -- common/autotest_common.sh@941 -- # uname 00:25:33.060 17:29:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:33.060 17:29:02 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 96082 00:25:33.060 17:29:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:33.060 17:29:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:33.060 17:29:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96082' 00:25:33.060 killing process with pid 96082 00:25:33.060 17:29:02 -- common/autotest_common.sh@955 -- # kill 96082 00:25:33.060 [2024-04-25 17:29:02.905816] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:33.060 17:29:02 -- common/autotest_common.sh@960 -- # wait 96082 00:25:33.319 17:29:03 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:33.319 17:29:03 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:33.319 17:29:03 -- spdkcli/common.sh@13 -- # '[' -n 96082 ']' 00:25:33.319 17:29:03 -- spdkcli/common.sh@14 -- # killprocess 96082 00:25:33.319 17:29:03 -- common/autotest_common.sh@936 -- # '[' -z 96082 ']' 00:25:33.319 17:29:03 -- common/autotest_common.sh@940 -- # kill -0 96082 00:25:33.319 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (96082) - No such process 00:25:33.319 Process with pid 96082 is not found 00:25:33.319 17:29:03 -- common/autotest_common.sh@963 -- # echo 'Process with pid 96082 is not found' 00:25:33.319 17:29:03 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:33.319 17:29:03 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:33.319 17:29:03 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:33.319 00:25:33.319 real 0m16.321s 00:25:33.319 user 0m34.929s 00:25:33.319 sys 0m0.803s 00:25:33.319 17:29:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:33.319 ************************************ 00:25:33.319 END TEST spdkcli_nvmf_tcp 00:25:33.319 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:25:33.319 ************************************ 00:25:33.319 17:29:03 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:33.319 17:29:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:33.319 17:29:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.319 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:25:33.319 ************************************ 00:25:33.319 START TEST nvmf_identify_passthru 00:25:33.319 ************************************ 00:25:33.319 17:29:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:33.319 * Looking for test storage... 
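Editor's note: the killprocess calls in the spdkcli teardown above follow one pattern: probe the pid with kill -0, send the signal, and tolerate the "No such process" case when the target already exited during cleanup (as happened on the second call for pid 96082). A simplified sketch of that pattern, not the real autotest_common.sh implementation:

  # simplified sketch of the killprocess pattern seen above
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }
  killprocess 96082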
00:25:33.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:33.319 17:29:03 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:33.319 17:29:03 -- nvmf/common.sh@7 -- # uname -s 00:25:33.319 17:29:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.319 17:29:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.319 17:29:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.319 17:29:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.319 17:29:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.319 17:29:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.319 17:29:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.319 17:29:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.319 17:29:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.319 17:29:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.319 17:29:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:25:33.319 17:29:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:25:33.320 17:29:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.320 17:29:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.320 17:29:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:33.320 17:29:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.320 17:29:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.320 17:29:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.320 17:29:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.320 17:29:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.320 17:29:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- paths/export.sh@5 -- # export PATH 00:25:33.320 17:29:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- nvmf/common.sh@47 -- # : 0 00:25:33.320 17:29:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.320 17:29:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.320 17:29:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.320 17:29:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.320 17:29:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.320 17:29:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.320 17:29:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.320 17:29:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.320 17:29:03 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.320 17:29:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.320 17:29:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.320 17:29:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.320 17:29:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- paths/export.sh@5 -- # export PATH 00:25:33.320 17:29:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.320 17:29:03 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:33.320 17:29:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:33.320 17:29:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.320 17:29:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:33.320 17:29:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:33.320 17:29:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:33.320 17:29:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.320 17:29:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:33.320 17:29:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.320 17:29:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:33.320 17:29:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:33.320 17:29:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:33.320 17:29:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:33.320 17:29:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:33.320 17:29:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:33.320 17:29:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.320 17:29:03 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.320 17:29:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:33.320 17:29:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:33.320 17:29:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:33.320 17:29:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:33.320 17:29:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:33.320 17:29:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.320 17:29:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:33.320 17:29:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:33.320 17:29:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:33.320 17:29:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:33.320 17:29:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:33.578 17:29:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:33.578 Cannot find device "nvmf_tgt_br" 00:25:33.578 17:29:03 -- nvmf/common.sh@155 -- # true 00:25:33.578 17:29:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:33.578 Cannot find device "nvmf_tgt_br2" 00:25:33.578 17:29:03 -- nvmf/common.sh@156 -- # true 00:25:33.578 17:29:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:33.578 17:29:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:33.578 Cannot find device "nvmf_tgt_br" 00:25:33.578 17:29:03 -- nvmf/common.sh@158 -- # true 00:25:33.578 17:29:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:33.578 Cannot find device "nvmf_tgt_br2" 00:25:33.578 17:29:03 -- nvmf/common.sh@159 -- # true 00:25:33.578 17:29:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:33.578 17:29:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:33.578 17:29:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:33.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.578 17:29:03 -- nvmf/common.sh@162 -- # true 00:25:33.578 17:29:03 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:33.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:33.578 17:29:03 -- nvmf/common.sh@163 -- # true 00:25:33.578 17:29:03 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:33.578 17:29:03 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:33.578 17:29:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:33.578 17:29:03 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:33.578 17:29:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:33.578 17:29:03 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:33.578 17:29:03 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:33.578 17:29:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:33.578 17:29:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:33.578 17:29:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:33.578 17:29:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:33.578 17:29:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:33.578 17:29:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:33.578 17:29:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:33.578 17:29:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:33.578 17:29:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:33.578 17:29:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:33.578 17:29:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:33.578 17:29:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:33.578 17:29:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:33.837 17:29:03 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:33.837 17:29:03 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:33.837 17:29:03 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:33.837 17:29:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:33.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:25:33.837 00:25:33.837 --- 10.0.0.2 ping statistics --- 00:25:33.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.837 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:33.837 17:29:03 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:33.837 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:33.837 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:25:33.837 00:25:33.837 --- 10.0.0.3 ping statistics --- 00:25:33.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.837 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:33.837 17:29:03 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:33.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:25:33.837 00:25:33.837 --- 10.0.0.1 ping statistics --- 00:25:33.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.837 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:33.837 17:29:03 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.837 17:29:03 -- nvmf/common.sh@422 -- # return 0 00:25:33.837 17:29:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:33.837 17:29:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.837 17:29:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:33.837 17:29:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:33.837 17:29:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.837 17:29:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:33.837 17:29:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:33.837 17:29:03 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:33.837 17:29:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:33.837 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:25:33.837 17:29:03 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:33.837 17:29:03 -- common/autotest_common.sh@1510 -- # bdfs=() 00:25:33.837 17:29:03 -- common/autotest_common.sh@1510 -- # local bdfs 00:25:33.837 17:29:03 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:25:33.837 17:29:03 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:25:33.837 17:29:03 -- common/autotest_common.sh@1499 -- # bdfs=() 00:25:33.837 17:29:03 -- common/autotest_common.sh@1499 -- # local bdfs 00:25:33.837 17:29:03 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:33.837 17:29:03 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:33.837 17:29:03 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:25:33.837 17:29:03 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:25:33.837 17:29:03 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:33.837 17:29:03 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:25:33.837 17:29:03 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:25:33.837 17:29:03 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:25:33.837 17:29:03 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:25:33.837 17:29:03 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:33.837 17:29:03 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:34.096 17:29:03 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:34.096 17:29:03 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:25:34.096 17:29:03 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:34.096 17:29:03 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:34.096 17:29:04 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:34.096 17:29:04 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:34.096 17:29:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:34.096 17:29:04 -- common/autotest_common.sh@10 -- # set +x 00:25:34.355 17:29:04 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:34.355 17:29:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:34.355 17:29:04 -- common/autotest_common.sh@10 -- # set +x 00:25:34.355 17:29:04 -- target/identify_passthru.sh@31 -- # nvmfpid=96555 00:25:34.355 17:29:04 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:34.355 17:29:04 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.355 17:29:04 -- target/identify_passthru.sh@35 -- # waitforlisten 96555 00:25:34.355 17:29:04 -- common/autotest_common.sh@817 -- # '[' -z 96555 ']' 00:25:34.355 17:29:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.355 17:29:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:34.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.355 17:29:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.355 17:29:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:34.355 17:29:04 -- common/autotest_common.sh@10 -- # set +x 00:25:34.355 [2024-04-25 17:29:04.157932] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:34.355 [2024-04-25 17:29:04.158529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.355 [2024-04-25 17:29:04.297905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:34.613 [2024-04-25 17:29:04.351851] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.614 [2024-04-25 17:29:04.351893] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.614 [2024-04-25 17:29:04.351919] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.614 [2024-04-25 17:29:04.351926] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.614 [2024-04-25 17:29:04.351931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
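The veth plumbing traced above is the virtual topology every NVMe/TCP run in this log relies on: the initiator-side interface stays in the root namespace at 10.0.0.1, the two target interfaces sit inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, their bridge-side peers are enslaved to nvmf_br, and port 4420 is opened in iptables before nvmf_tgt is started inside that namespace. Condensed from the commands in the trace (same names and addresses; run as root, error handling omitted):

# namespace for the target plus one veth pair per interface
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addresses: initiator in the root namespace, both target ports inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # host reaches both target addresses through the bridge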
00:25:34.614 [2024-04-25 17:29:04.352095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.614 [2024-04-25 17:29:04.352178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.614 [2024-04-25 17:29:04.353635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.614 [2024-04-25 17:29:04.353675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.181 17:29:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:35.181 17:29:05 -- common/autotest_common.sh@850 -- # return 0 00:25:35.181 17:29:05 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:35.181 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.181 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.181 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.181 17:29:05 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:35.181 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.181 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.181 [2024-04-25 17:29:05.144217] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:35.181 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.181 17:29:05 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.181 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.181 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.181 [2024-04-25 17:29:05.153514] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.440 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.440 17:29:05 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:35.440 17:29:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:35.440 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 17:29:05 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:25:35.440 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.440 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 Nvme0n1 00:25:35.440 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.440 17:29:05 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:35.440 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.440 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.440 17:29:05 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:35.440 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.440 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.440 17:29:05 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.440 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.440 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 [2024-04-25 17:29:05.295252] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.440 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:25:35.440 17:29:05 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:35.440 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.440 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 [2024-04-25 17:29:05.303032] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:35.440 [ 00:25:35.440 { 00:25:35.440 "allow_any_host": true, 00:25:35.440 "hosts": [], 00:25:35.440 "listen_addresses": [], 00:25:35.440 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:35.440 "subtype": "Discovery" 00:25:35.440 }, 00:25:35.440 { 00:25:35.440 "allow_any_host": true, 00:25:35.440 "hosts": [], 00:25:35.440 "listen_addresses": [ 00:25:35.440 { 00:25:35.440 "adrfam": "IPv4", 00:25:35.440 "traddr": "10.0.0.2", 00:25:35.440 "transport": "TCP", 00:25:35.440 "trsvcid": "4420", 00:25:35.440 "trtype": "TCP" 00:25:35.440 } 00:25:35.440 ], 00:25:35.440 "max_cntlid": 65519, 00:25:35.440 "max_namespaces": 1, 00:25:35.440 "min_cntlid": 1, 00:25:35.440 "model_number": "SPDK bdev Controller", 00:25:35.440 "namespaces": [ 00:25:35.440 { 00:25:35.440 "bdev_name": "Nvme0n1", 00:25:35.440 "name": "Nvme0n1", 00:25:35.440 "nguid": "07856565A16F48BDB4E65CBE317663AB", 00:25:35.440 "nsid": 1, 00:25:35.440 "uuid": "07856565-a16f-48bd-b4e6-5cbe317663ab" 00:25:35.440 } 00:25:35.440 ], 00:25:35.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.440 "serial_number": "SPDK00000000000001", 00:25:35.440 "subtype": "NVMe" 00:25:35.440 } 00:25:35.440 ] 00:25:35.440 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.440 17:29:05 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:35.440 17:29:05 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:35.440 17:29:05 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:35.699 17:29:05 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:35.699 17:29:05 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:35.699 17:29:05 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:35.699 17:29:05 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:35.958 17:29:05 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:35.958 17:29:05 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:35.958 17:29:05 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:35.958 17:29:05 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.958 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.958 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.958 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.958 17:29:05 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:35.958 17:29:05 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:35.958 17:29:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:35.958 17:29:05 -- nvmf/common.sh@117 -- # sync 00:25:35.958 17:29:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.958 17:29:05 -- nvmf/common.sh@120 -- # set +e 00:25:35.958 17:29:05 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:25:35.958 17:29:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.958 rmmod nvme_tcp 00:25:35.958 rmmod nvme_fabrics 00:25:35.958 rmmod nvme_keyring 00:25:35.958 17:29:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.958 17:29:05 -- nvmf/common.sh@124 -- # set -e 00:25:35.958 17:29:05 -- nvmf/common.sh@125 -- # return 0 00:25:35.958 17:29:05 -- nvmf/common.sh@478 -- # '[' -n 96555 ']' 00:25:35.958 17:29:05 -- nvmf/common.sh@479 -- # killprocess 96555 00:25:35.958 17:29:05 -- common/autotest_common.sh@936 -- # '[' -z 96555 ']' 00:25:35.958 17:29:05 -- common/autotest_common.sh@940 -- # kill -0 96555 00:25:35.958 17:29:05 -- common/autotest_common.sh@941 -- # uname 00:25:35.958 17:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.958 17:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96555 00:25:35.958 17:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:35.958 17:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:35.958 killing process with pid 96555 00:25:35.958 17:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96555' 00:25:35.958 17:29:05 -- common/autotest_common.sh@955 -- # kill 96555 00:25:35.958 [2024-04-25 17:29:05.858438] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:35.958 17:29:05 -- common/autotest_common.sh@960 -- # wait 96555 00:25:36.217 17:29:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:36.217 17:29:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:36.217 17:29:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:36.217 17:29:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.217 17:29:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.217 17:29:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.217 17:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:36.217 17:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.217 17:29:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:36.217 00:25:36.217 real 0m2.890s 00:25:36.217 user 0m7.109s 00:25:36.217 sys 0m0.729s 00:25:36.217 17:29:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:36.217 17:29:06 -- common/autotest_common.sh@10 -- # set +x 00:25:36.217 ************************************ 00:25:36.217 END TEST nvmf_identify_passthru 00:25:36.217 ************************************ 00:25:36.217 17:29:06 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:36.217 17:29:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:36.218 17:29:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:36.218 17:29:06 -- common/autotest_common.sh@10 -- # set +x 00:25:36.218 ************************************ 00:25:36.218 START TEST nvmf_dif 00:25:36.218 ************************************ 00:25:36.218 17:29:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:36.477 * Looking for test storage... 
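The pass/fail check for identify_passthru is visible in the trace just above: the serial and model numbers read from the controller over PCIe (12340 / QEMU) must match what the same controller reports when reached through the passthru subsystem over NVMe/TCP. A minimal standalone restatement of that comparison, reusing the spdk_nvme_identify invocations from the log (repo path as in the log; the error handling here is simplified):

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
# local view of the controller over PCIe
nvme_serial=$($identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
nvme_model=$($identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 | grep 'Model Number:' | awk '{print $3}')
# remote view through the target's passthru subsystem over TCP
tcp_tr=' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
nvmf_serial=$($identify -r "$tcp_tr" | grep 'Serial Number:' | awk '{print $3}')
nvmf_model=$($identify -r "$tcp_tr" | grep 'Model Number:' | awk '{print $3}')
# the test fails if either value changes on its way through the target
[ "$nvme_serial" = "$nvmf_serial" ] || exit 1
[ "$nvme_model" = "$nvmf_model" ] || exit 1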
00:25:36.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:36.477 17:29:06 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:36.477 17:29:06 -- nvmf/common.sh@7 -- # uname -s 00:25:36.477 17:29:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.477 17:29:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.477 17:29:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.477 17:29:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.477 17:29:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.477 17:29:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.477 17:29:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.477 17:29:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.477 17:29:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.477 17:29:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.477 17:29:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:25:36.477 17:29:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:25:36.477 17:29:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.477 17:29:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.477 17:29:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:36.477 17:29:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.477 17:29:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:36.477 17:29:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.477 17:29:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.477 17:29:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.477 17:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.477 17:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.477 17:29:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.477 17:29:06 -- paths/export.sh@5 -- # export PATH 00:25:36.477 17:29:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.477 17:29:06 -- nvmf/common.sh@47 -- # : 0 00:25:36.477 17:29:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:36.477 17:29:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:36.477 17:29:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.477 17:29:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.477 17:29:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.477 17:29:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:36.477 17:29:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:36.477 17:29:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:36.477 17:29:06 -- target/dif.sh@15 -- # NULL_META=16 00:25:36.477 17:29:06 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:36.477 17:29:06 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:36.477 17:29:06 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:36.477 17:29:06 -- target/dif.sh@135 -- # nvmftestinit 00:25:36.477 17:29:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:36.477 17:29:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.477 17:29:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:36.477 17:29:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:36.477 17:29:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:36.477 17:29:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.477 17:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:36.477 17:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.477 17:29:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:36.477 17:29:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:36.477 17:29:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:36.477 17:29:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:36.477 17:29:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:36.477 17:29:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:36.477 17:29:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.477 17:29:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.477 17:29:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:36.477 17:29:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:36.477 17:29:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:36.477 17:29:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:36.477 17:29:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:36.477 17:29:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.477 17:29:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:36.477 17:29:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:36.477 17:29:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:36.477 17:29:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:36.477 17:29:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:36.477 17:29:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:36.477 Cannot find device "nvmf_tgt_br" 
00:25:36.477 17:29:06 -- nvmf/common.sh@155 -- # true 00:25:36.477 17:29:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.477 Cannot find device "nvmf_tgt_br2" 00:25:36.477 17:29:06 -- nvmf/common.sh@156 -- # true 00:25:36.477 17:29:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:36.477 17:29:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:36.477 Cannot find device "nvmf_tgt_br" 00:25:36.477 17:29:06 -- nvmf/common.sh@158 -- # true 00:25:36.477 17:29:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:36.477 Cannot find device "nvmf_tgt_br2" 00:25:36.477 17:29:06 -- nvmf/common.sh@159 -- # true 00:25:36.477 17:29:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:36.477 17:29:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:36.477 17:29:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.477 17:29:06 -- nvmf/common.sh@162 -- # true 00:25:36.477 17:29:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.477 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.477 17:29:06 -- nvmf/common.sh@163 -- # true 00:25:36.477 17:29:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:36.477 17:29:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:36.477 17:29:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:36.477 17:29:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:36.737 17:29:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:36.737 17:29:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:36.737 17:29:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:36.737 17:29:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:36.737 17:29:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:36.737 17:29:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:36.737 17:29:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:36.737 17:29:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:36.737 17:29:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:36.737 17:29:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:36.737 17:29:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:36.737 17:29:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.737 17:29:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:36.737 17:29:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:36.737 17:29:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.737 17:29:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.737 17:29:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.737 17:29:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.737 17:29:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.737 17:29:06 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:36.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:25:36.737 00:25:36.737 --- 10.0.0.2 ping statistics --- 00:25:36.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.737 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:25:36.737 17:29:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:36.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:25:36.737 00:25:36.737 --- 10.0.0.3 ping statistics --- 00:25:36.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.737 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:36.737 17:29:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:25:36.737 00:25:36.737 --- 10.0.0.1 ping statistics --- 00:25:36.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.737 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:25:36.737 17:29:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.737 17:29:06 -- nvmf/common.sh@422 -- # return 0 00:25:36.737 17:29:06 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:25:36.737 17:29:06 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:37.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:37.306 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:37.306 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:37.306 17:29:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.306 17:29:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:37.306 17:29:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:37.306 17:29:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.306 17:29:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:37.306 17:29:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:37.306 17:29:07 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:37.306 17:29:07 -- target/dif.sh@137 -- # nvmfappstart 00:25:37.306 17:29:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:37.306 17:29:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:37.306 17:29:07 -- common/autotest_common.sh@10 -- # set +x 00:25:37.306 17:29:07 -- nvmf/common.sh@470 -- # nvmfpid=96907 00:25:37.306 17:29:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:37.306 17:29:07 -- nvmf/common.sh@471 -- # waitforlisten 96907 00:25:37.306 17:29:07 -- common/autotest_common.sh@817 -- # '[' -z 96907 ']' 00:25:37.306 17:29:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.306 17:29:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:37.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.306 17:29:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
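As with the passthru run, the dif target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF) and the harness then blocks in waitforlisten until the RPC socket answers. The snippet below is only an illustration of that wait, not the exact helper from autotest_common.sh; the socket path and the 100-retry budget are the values reported in the trace:

tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk "$tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
for _ in $(seq 1 100); do
    # done as soon as the target answers an RPC on the default socket
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
    sleep 0.1
done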
00:25:37.306 17:29:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:37.306 17:29:07 -- common/autotest_common.sh@10 -- # set +x 00:25:37.306 [2024-04-25 17:29:07.183339] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:25:37.306 [2024-04-25 17:29:07.183427] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.565 [2024-04-25 17:29:07.324562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.565 [2024-04-25 17:29:07.395140] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.565 [2024-04-25 17:29:07.395193] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.565 [2024-04-25 17:29:07.395207] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.565 [2024-04-25 17:29:07.395217] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.565 [2024-04-25 17:29:07.395226] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.565 [2024-04-25 17:29:07.395266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.501 17:29:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:38.501 17:29:08 -- common/autotest_common.sh@850 -- # return 0 00:25:38.501 17:29:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:38.501 17:29:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:38.501 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:25:38.501 17:29:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.501 17:29:08 -- target/dif.sh@139 -- # create_transport 00:25:38.501 17:29:08 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:38.501 17:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.501 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:25:38.501 [2024-04-25 17:29:08.225012] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.501 17:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.501 17:29:08 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:38.501 17:29:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:38.501 17:29:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.501 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:25:38.501 ************************************ 00:25:38.501 START TEST fio_dif_1_default 00:25:38.501 ************************************ 00:25:38.501 17:29:08 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:25:38.501 17:29:08 -- target/dif.sh@86 -- # create_subsystems 0 00:25:38.501 17:29:08 -- target/dif.sh@28 -- # local sub 00:25:38.501 17:29:08 -- target/dif.sh@30 -- # for sub in "$@" 00:25:38.501 17:29:08 -- target/dif.sh@31 -- # create_subsystem 0 00:25:38.501 17:29:08 -- target/dif.sh@18 -- # local sub_id=0 00:25:38.501 17:29:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:38.501 17:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.501 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:25:38.501 bdev_null0 00:25:38.501 17:29:08 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.501 17:29:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:38.501 17:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.501 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:25:38.501 17:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.501 17:29:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:38.501 17:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.501 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:25:38.501 17:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.501 17:29:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.501 17:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.501 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:25:38.501 [2024-04-25 17:29:08.333157] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.501 17:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.501 17:29:08 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:38.501 17:29:08 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:38.501 17:29:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:38.501 17:29:08 -- nvmf/common.sh@521 -- # config=() 00:25:38.501 17:29:08 -- nvmf/common.sh@521 -- # local subsystem config 00:25:38.501 17:29:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.501 17:29:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:38.501 17:29:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:38.501 { 00:25:38.501 "params": { 00:25:38.501 "name": "Nvme$subsystem", 00:25:38.501 "trtype": "$TEST_TRANSPORT", 00:25:38.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.501 "adrfam": "ipv4", 00:25:38.501 "trsvcid": "$NVMF_PORT", 00:25:38.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.501 "hdgst": ${hdgst:-false}, 00:25:38.501 "ddgst": ${ddgst:-false} 00:25:38.501 }, 00:25:38.501 "method": "bdev_nvme_attach_controller" 00:25:38.501 } 00:25:38.501 EOF 00:25:38.501 )") 00:25:38.501 17:29:08 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.501 17:29:08 -- target/dif.sh@82 -- # gen_fio_conf 00:25:38.501 17:29:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:38.501 17:29:08 -- target/dif.sh@54 -- # local file 00:25:38.501 17:29:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.501 17:29:08 -- target/dif.sh@56 -- # cat 00:25:38.501 17:29:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:38.501 17:29:08 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.501 17:29:08 -- common/autotest_common.sh@1327 -- # shift 00:25:38.501 17:29:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:38.501 17:29:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.501 17:29:08 -- nvmf/common.sh@543 -- # cat 00:25:38.501 17:29:08 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.501 17:29:08 -- 
target/dif.sh@72 -- # (( file = 1 )) 00:25:38.501 17:29:08 -- target/dif.sh@72 -- # (( file <= files )) 00:25:38.501 17:29:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:38.501 17:29:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:38.501 17:29:08 -- nvmf/common.sh@545 -- # jq . 00:25:38.501 17:29:08 -- nvmf/common.sh@546 -- # IFS=, 00:25:38.501 17:29:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:38.501 "params": { 00:25:38.501 "name": "Nvme0", 00:25:38.501 "trtype": "tcp", 00:25:38.501 "traddr": "10.0.0.2", 00:25:38.501 "adrfam": "ipv4", 00:25:38.501 "trsvcid": "4420", 00:25:38.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:38.501 "hdgst": false, 00:25:38.501 "ddgst": false 00:25:38.501 }, 00:25:38.501 "method": "bdev_nvme_attach_controller" 00:25:38.501 }' 00:25:38.501 17:29:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:38.501 17:29:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:38.501 17:29:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.501 17:29:08 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.502 17:29:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:38.502 17:29:08 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:38.502 17:29:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:38.502 17:29:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:38.502 17:29:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:38.502 17:29:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.760 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:38.760 fio-3.35 00:25:38.760 Starting 1 thread 00:25:51.002 00:25:51.002 filename0: (groupid=0, jobs=1): err= 0: pid=97001: Thu Apr 25 17:29:19 2024 00:25:51.002 read: IOPS=1119, BW=4479KiB/s (4587kB/s)(43.9MiB/10027msec) 00:25:51.002 slat (nsec): min=5806, max=46938, avg=7535.74, stdev=3172.20 00:25:51.002 clat (usec): min=351, max=41955, avg=3549.22, stdev=10815.54 00:25:51.002 lat (usec): min=356, max=41965, avg=3556.76, stdev=10815.56 00:25:51.002 clat percentiles (usec): 00:25:51.002 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 371], 20.00th=[ 388], 00:25:51.002 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 429], 00:25:51.002 | 70.00th=[ 441], 80.00th=[ 457], 90.00th=[ 523], 95.00th=[40633], 00:25:51.002 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:51.002 | 99.99th=[42206] 00:25:51.002 bw ( KiB/s): min= 1568, max= 6784, per=100.00%, avg=4489.60, stdev=1484.98, samples=20 00:25:51.002 iops : min= 392, max= 1696, avg=1122.40, stdev=371.25, samples=20 00:25:51.002 lat (usec) : 500=88.46%, 750=3.78% 00:25:51.002 lat (msec) : 4=0.04%, 50=7.73% 00:25:51.002 cpu : usr=91.81%, sys=7.53%, ctx=26, majf=0, minf=0 00:25:51.002 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:51.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.002 issued rwts: total=11228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.002 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:51.002 00:25:51.002 Run status group 0 (all jobs): 
00:25:51.002 READ: bw=4479KiB/s (4587kB/s), 4479KiB/s-4479KiB/s (4587kB/s-4587kB/s), io=43.9MiB (46.0MB), run=10027-10027msec 00:25:51.002 17:29:19 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:51.003 17:29:19 -- target/dif.sh@43 -- # local sub 00:25:51.003 17:29:19 -- target/dif.sh@45 -- # for sub in "$@" 00:25:51.003 17:29:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:51.003 17:29:19 -- target/dif.sh@36 -- # local sub_id=0 00:25:51.003 17:29:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 00:25:51.003 real 0m10.937s 00:25:51.003 user 0m9.824s 00:25:51.003 sys 0m0.963s 00:25:51.003 17:29:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 ************************************ 00:25:51.003 END TEST fio_dif_1_default 00:25:51.003 ************************************ 00:25:51.003 17:29:19 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:51.003 17:29:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:51.003 17:29:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 ************************************ 00:25:51.003 START TEST fio_dif_1_multi_subsystems 00:25:51.003 ************************************ 00:25:51.003 17:29:19 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:25:51.003 17:29:19 -- target/dif.sh@92 -- # local files=1 00:25:51.003 17:29:19 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:51.003 17:29:19 -- target/dif.sh@28 -- # local sub 00:25:51.003 17:29:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:51.003 17:29:19 -- target/dif.sh@31 -- # create_subsystem 0 00:25:51.003 17:29:19 -- target/dif.sh@18 -- # local sub_id=0 00:25:51.003 17:29:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 bdev_null0 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 
-s 4420 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 [2024-04-25 17:29:19.378108] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:51.003 17:29:19 -- target/dif.sh@31 -- # create_subsystem 1 00:25:51.003 17:29:19 -- target/dif.sh@18 -- # local sub_id=1 00:25:51.003 17:29:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 bdev_null1 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.003 17:29:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:51.003 17:29:19 -- common/autotest_common.sh@10 -- # set +x 00:25:51.003 17:29:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.003 17:29:19 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:51.003 17:29:19 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:51.003 17:29:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:51.003 17:29:19 -- nvmf/common.sh@521 -- # config=() 00:25:51.003 17:29:19 -- nvmf/common.sh@521 -- # local subsystem config 00:25:51.003 17:29:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.003 17:29:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:51.003 17:29:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:51.003 { 00:25:51.003 "params": { 00:25:51.003 "name": "Nvme$subsystem", 00:25:51.003 "trtype": "$TEST_TRANSPORT", 00:25:51.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.003 "adrfam": "ipv4", 00:25:51.003 "trsvcid": "$NVMF_PORT", 00:25:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.003 "hdgst": ${hdgst:-false}, 00:25:51.003 "ddgst": ${ddgst:-false} 00:25:51.003 }, 00:25:51.003 "method": "bdev_nvme_attach_controller" 00:25:51.003 } 00:25:51.003 EOF 00:25:51.003 )") 00:25:51.003 17:29:19 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.003 17:29:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:51.003 17:29:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.003 17:29:19 -- common/autotest_common.sh@1325 -- # local sanitizers 
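Both subsystems used by fio_dif_1_multi_subsystems are assembled with the same four RPCs traced above: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, wrapped in a subsystem with a single namespace and a TCP listener on 10.0.0.2:4420. Restated for the first one, with scripts/rpc.py standing in for the rpc_cmd wrapper the script actually uses:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 1
"$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420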
00:25:51.003 17:29:19 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:51.003 17:29:19 -- common/autotest_common.sh@1327 -- # shift 00:25:51.003 17:29:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:51.003 17:29:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.003 17:29:19 -- nvmf/common.sh@543 -- # cat 00:25:51.003 17:29:19 -- target/dif.sh@82 -- # gen_fio_conf 00:25:51.003 17:29:19 -- target/dif.sh@54 -- # local file 00:25:51.003 17:29:19 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:51.003 17:29:19 -- target/dif.sh@56 -- # cat 00:25:51.003 17:29:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:51.003 17:29:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:51.003 17:29:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:51.003 17:29:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:51.003 { 00:25:51.003 "params": { 00:25:51.003 "name": "Nvme$subsystem", 00:25:51.003 "trtype": "$TEST_TRANSPORT", 00:25:51.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.003 "adrfam": "ipv4", 00:25:51.003 "trsvcid": "$NVMF_PORT", 00:25:51.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.003 "hdgst": ${hdgst:-false}, 00:25:51.003 "ddgst": ${ddgst:-false} 00:25:51.003 }, 00:25:51.003 "method": "bdev_nvme_attach_controller" 00:25:51.003 } 00:25:51.003 EOF 00:25:51.003 )") 00:25:51.003 17:29:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:51.003 17:29:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:51.003 17:29:19 -- target/dif.sh@73 -- # cat 00:25:51.003 17:29:19 -- nvmf/common.sh@543 -- # cat 00:25:51.003 17:29:19 -- target/dif.sh@72 -- # (( file++ )) 00:25:51.003 17:29:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:51.003 17:29:19 -- nvmf/common.sh@545 -- # jq . 
00:25:51.003 17:29:19 -- nvmf/common.sh@546 -- # IFS=, 00:25:51.003 17:29:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:51.003 "params": { 00:25:51.003 "name": "Nvme0", 00:25:51.003 "trtype": "tcp", 00:25:51.003 "traddr": "10.0.0.2", 00:25:51.003 "adrfam": "ipv4", 00:25:51.004 "trsvcid": "4420", 00:25:51.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:51.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:51.004 "hdgst": false, 00:25:51.004 "ddgst": false 00:25:51.004 }, 00:25:51.004 "method": "bdev_nvme_attach_controller" 00:25:51.004 },{ 00:25:51.004 "params": { 00:25:51.004 "name": "Nvme1", 00:25:51.004 "trtype": "tcp", 00:25:51.004 "traddr": "10.0.0.2", 00:25:51.004 "adrfam": "ipv4", 00:25:51.004 "trsvcid": "4420", 00:25:51.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:51.004 "hdgst": false, 00:25:51.004 "ddgst": false 00:25:51.004 }, 00:25:51.004 "method": "bdev_nvme_attach_controller" 00:25:51.004 }' 00:25:51.004 17:29:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:51.004 17:29:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:51.004 17:29:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.004 17:29:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:51.004 17:29:19 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:51.004 17:29:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:51.004 17:29:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:51.004 17:29:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:51.004 17:29:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:51.004 17:29:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:51.004 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:51.004 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:51.004 fio-3.35 00:25:51.004 Starting 2 threads 00:26:00.988 00:26:00.988 filename0: (groupid=0, jobs=1): err= 0: pid=97160: Thu Apr 25 17:29:30 2024 00:26:00.988 read: IOPS=171, BW=685KiB/s (701kB/s)(6864KiB/10026msec) 00:26:00.988 slat (nsec): min=6273, max=66058, avg=8153.95, stdev=3367.94 00:26:00.988 clat (usec): min=374, max=41591, avg=23344.17, stdev=20025.79 00:26:00.988 lat (usec): min=380, max=41602, avg=23352.32, stdev=20025.81 00:26:00.988 clat percentiles (usec): 00:26:00.988 | 1.00th=[ 388], 5.00th=[ 404], 10.00th=[ 416], 20.00th=[ 433], 00:26:00.988 | 30.00th=[ 457], 40.00th=[ 594], 50.00th=[40633], 60.00th=[40633], 00:26:00.988 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:00.988 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:00.988 | 99.99th=[41681] 00:26:00.988 bw ( KiB/s): min= 480, max= 960, per=49.38%, avg=684.80, stdev=125.62, samples=20 00:26:00.988 iops : min= 120, max= 240, avg=171.20, stdev=31.40, samples=20 00:26:00.988 lat (usec) : 500=38.34%, 750=3.85%, 1000=1.05% 00:26:00.988 lat (msec) : 2=0.12%, 50=56.64% 00:26:00.988 cpu : usr=95.85%, sys=3.79%, ctx=9, majf=0, minf=0 00:26:00.988 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.988 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.988 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.988 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:00.988 filename1: (groupid=0, jobs=1): err= 0: pid=97161: Thu Apr 25 17:29:30 2024 00:26:00.988 read: IOPS=175, BW=702KiB/s (719kB/s)(7024KiB/10006msec) 00:26:00.988 slat (nsec): min=6261, max=70646, avg=8279.99, stdev=3551.47 00:26:00.988 clat (usec): min=375, max=42444, avg=22767.37, stdev=20122.01 00:26:00.988 lat (usec): min=381, max=42454, avg=22775.65, stdev=20121.92 00:26:00.988 clat percentiles (usec): 00:26:00.988 | 1.00th=[ 383], 5.00th=[ 400], 10.00th=[ 412], 20.00th=[ 433], 00:26:00.988 | 30.00th=[ 457], 40.00th=[ 578], 50.00th=[40633], 60.00th=[40633], 00:26:00.988 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:00.988 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:00.988 | 99.99th=[42206] 00:26:00.988 bw ( KiB/s): min= 544, max= 960, per=50.90%, avg=705.68, stdev=110.58, samples=19 00:26:00.988 iops : min= 136, max= 240, avg=176.42, stdev=27.65, samples=19 00:26:00.988 lat (usec) : 500=36.73%, 750=6.83%, 1000=1.08% 00:26:00.988 lat (msec) : 2=0.23%, 50=55.13% 00:26:00.988 cpu : usr=95.39%, sys=4.23%, ctx=92, majf=0, minf=0 00:26:00.988 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.988 issued rwts: total=1756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.988 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:00.988 00:26:00.989 Run status group 0 (all jobs): 00:26:00.989 READ: bw=1385KiB/s (1418kB/s), 685KiB/s-702KiB/s (701kB/s-719kB/s), io=13.6MiB (14.2MB), run=10006-10026msec 00:26:00.989 17:29:30 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:00.989 17:29:30 -- target/dif.sh@43 -- # local sub 00:26:00.989 17:29:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.989 17:29:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:00.989 17:29:30 -- target/dif.sh@36 -- # local sub_id=0 00:26:00.989 17:29:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 17:29:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 17:29:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.989 17:29:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:00.989 17:29:30 -- target/dif.sh@36 -- # local sub_id=1 00:26:00.989 17:29:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 17:29:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 
-- # set +x 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 00:26:00.989 real 0m11.111s 00:26:00.989 user 0m19.916s 00:26:00.989 sys 0m1.051s 00:26:00.989 17:29:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 ************************************ 00:26:00.989 END TEST fio_dif_1_multi_subsystems 00:26:00.989 ************************************ 00:26:00.989 17:29:30 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:00.989 17:29:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:00.989 17:29:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 ************************************ 00:26:00.989 START TEST fio_dif_rand_params 00:26:00.989 ************************************ 00:26:00.989 17:29:30 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:26:00.989 17:29:30 -- target/dif.sh@100 -- # local NULL_DIF 00:26:00.989 17:29:30 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:00.989 17:29:30 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:00.989 17:29:30 -- target/dif.sh@103 -- # bs=128k 00:26:00.989 17:29:30 -- target/dif.sh@103 -- # numjobs=3 00:26:00.989 17:29:30 -- target/dif.sh@103 -- # iodepth=3 00:26:00.989 17:29:30 -- target/dif.sh@103 -- # runtime=5 00:26:00.989 17:29:30 -- target/dif.sh@105 -- # create_subsystems 0 00:26:00.989 17:29:30 -- target/dif.sh@28 -- # local sub 00:26:00.989 17:29:30 -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.989 17:29:30 -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.989 17:29:30 -- target/dif.sh@18 -- # local sub_id=0 00:26:00.989 17:29:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 bdev_null0 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 17:29:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 17:29:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 17:29:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.989 17:29:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.989 17:29:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.989 [2024-04-25 17:29:30.606695] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.989 17:29:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.989 17:29:30 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:00.989 17:29:30 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:00.989 17:29:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:00.989 17:29:30 -- nvmf/common.sh@521 -- # 
config=() 00:26:00.989 17:29:30 -- nvmf/common.sh@521 -- # local subsystem config 00:26:00.989 17:29:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.989 17:29:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:00.989 17:29:30 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.989 17:29:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:00.989 { 00:26:00.989 "params": { 00:26:00.989 "name": "Nvme$subsystem", 00:26:00.989 "trtype": "$TEST_TRANSPORT", 00:26:00.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.989 "adrfam": "ipv4", 00:26:00.989 "trsvcid": "$NVMF_PORT", 00:26:00.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.989 "hdgst": ${hdgst:-false}, 00:26:00.989 "ddgst": ${ddgst:-false} 00:26:00.989 }, 00:26:00.989 "method": "bdev_nvme_attach_controller" 00:26:00.989 } 00:26:00.989 EOF 00:26:00.989 )") 00:26:00.989 17:29:30 -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.989 17:29:30 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:00.989 17:29:30 -- target/dif.sh@54 -- # local file 00:26:00.989 17:29:30 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.989 17:29:30 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:00.989 17:29:30 -- target/dif.sh@56 -- # cat 00:26:00.989 17:29:30 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.989 17:29:30 -- common/autotest_common.sh@1327 -- # shift 00:26:00.989 17:29:30 -- nvmf/common.sh@543 -- # cat 00:26:00.989 17:29:30 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:00.989 17:29:30 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:00.989 17:29:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.989 17:29:30 -- nvmf/common.sh@545 -- # jq . 
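The heredoc/cat trace above is gen_nvmf_target_json assembling the JSON that the fio spdk_bdev ioengine reads from /dev/fd/62: one bdev_nvme_attach_controller fragment is expanded per subsystem id, the fragments are joined with IFS=, / printf, and the whole config is passed through jq. What follows is a minimal stand-alone sketch of that pattern, not the actual nvmf/common.sh source; the transport, address and port values are the ones printed in this log.

#!/usr/bin/env bash
# Sketch of the config-assembly pattern traced above; the real helper also
# wraps the joined fragments and runs the result through `jq .`.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # joins the fragments with ',' as in the trace
}

gen_target_json_sketch 0   # single-subsystem case traced here; 0 1 2 appears later in this log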
00:26:00.989 17:29:30 -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.989 17:29:30 -- nvmf/common.sh@546 -- # IFS=, 00:26:00.989 17:29:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:00.989 "params": { 00:26:00.989 "name": "Nvme0", 00:26:00.989 "trtype": "tcp", 00:26:00.989 "traddr": "10.0.0.2", 00:26:00.989 "adrfam": "ipv4", 00:26:00.989 "trsvcid": "4420", 00:26:00.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.989 "hdgst": false, 00:26:00.989 "ddgst": false 00:26:00.989 }, 00:26:00.989 "method": "bdev_nvme_attach_controller" 00:26:00.989 }' 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:00.989 17:29:30 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:00.989 17:29:30 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:00.989 17:29:30 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:00.989 17:29:30 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:00.989 17:29:30 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:00.989 17:29:30 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.989 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:00.989 ... 00:26:00.989 fio-3.35 00:26:00.989 Starting 3 threads 00:26:07.549 00:26:07.549 filename0: (groupid=0, jobs=1): err= 0: pid=97322: Thu Apr 25 17:29:36 2024 00:26:07.549 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(185MiB/5004msec) 00:26:07.549 slat (nsec): min=6622, max=37503, avg=11043.45, stdev=3717.01 00:26:07.549 clat (usec): min=5097, max=50405, avg=10132.44, stdev=2790.15 00:26:07.549 lat (usec): min=5107, max=50416, avg=10143.48, stdev=2790.06 00:26:07.549 clat percentiles (usec): 00:26:07.549 | 1.00th=[ 6128], 5.00th=[ 7373], 10.00th=[ 8717], 20.00th=[ 9372], 00:26:07.549 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:26:07.549 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:26:07.549 | 99.00th=[12649], 99.50th=[13698], 99.90th=[50594], 99.95th=[50594], 00:26:07.549 | 99.99th=[50594] 00:26:07.549 bw ( KiB/s): min=34560, max=39936, per=38.01%, avg=37811.20, stdev=1512.35, samples=10 00:26:07.549 iops : min= 270, max= 312, avg=295.40, stdev=11.82, samples=10 00:26:07.549 lat (msec) : 10=43.88%, 20=55.71%, 50=0.07%, 100=0.34% 00:26:07.549 cpu : usr=92.26%, sys=6.26%, ctx=11, majf=0, minf=9 00:26:07.549 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.549 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.549 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:07.549 filename0: (groupid=0, jobs=1): err= 0: pid=97323: Thu Apr 25 17:29:36 2024 00:26:07.549 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5004msec) 00:26:07.549 slat (nsec): min=6481, max=52802, avg=10509.91, stdev=4210.62 00:26:07.550 clat (usec): min=3238, max=53085, avg=11449.49, 
stdev=4848.82 00:26:07.550 lat (usec): min=3247, max=53095, avg=11460.00, stdev=4848.76 00:26:07.550 clat percentiles (usec): 00:26:07.550 | 1.00th=[ 6652], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10290], 00:26:07.550 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:26:07.550 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12125], 95.00th=[12518], 00:26:07.550 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:26:07.550 | 99.99th=[53216] 00:26:07.550 bw ( KiB/s): min=29184, max=35584, per=33.61%, avg=33433.60, stdev=1862.54, samples=10 00:26:07.550 iops : min= 228, max= 278, avg=261.20, stdev=14.55, samples=10 00:26:07.550 lat (msec) : 4=0.08%, 10=13.60%, 20=84.95%, 50=0.08%, 100=1.30% 00:26:07.550 cpu : usr=92.40%, sys=6.22%, ctx=19, majf=0, minf=0 00:26:07.550 IO depths : 1=6.2%, 2=93.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.550 issued rwts: total=1309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:07.550 filename0: (groupid=0, jobs=1): err= 0: pid=97324: Thu Apr 25 17:29:36 2024 00:26:07.550 read: IOPS=220, BW=27.5MiB/s (28.8MB/s)(138MiB/5002msec) 00:26:07.550 slat (nsec): min=6479, max=36924, avg=9203.18, stdev=4075.44 00:26:07.550 clat (usec): min=8253, max=17765, avg=13604.25, stdev=1657.55 00:26:07.550 lat (usec): min=8260, max=17779, avg=13613.46, stdev=1657.83 00:26:07.550 clat percentiles (usec): 00:26:07.550 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[12256], 20.00th=[13042], 00:26:07.550 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:26:07.550 | 70.00th=[14484], 80.00th=[14615], 90.00th=[15139], 95.00th=[15401], 00:26:07.550 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:26:07.550 | 99.99th=[17695] 00:26:07.550 bw ( KiB/s): min=26880, max=30720, per=28.26%, avg=28114.40, stdev=1316.92, samples=10 00:26:07.550 iops : min= 210, max= 240, avg=219.60, stdev=10.28, samples=10 00:26:07.550 lat (msec) : 10=7.90%, 20=92.10% 00:26:07.550 cpu : usr=92.70%, sys=6.06%, ctx=51, majf=0, minf=0 00:26:07.550 IO depths : 1=32.8%, 2=67.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.550 issued rwts: total=1101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:07.550 00:26:07.550 Run status group 0 (all jobs): 00:26:07.550 READ: bw=97.1MiB/s (102MB/s), 27.5MiB/s-36.9MiB/s (28.8MB/s-38.7MB/s), io=486MiB (510MB), run=5002-5004msec 00:26:07.550 17:29:36 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:07.550 17:29:36 -- target/dif.sh@43 -- # local sub 00:26:07.550 17:29:36 -- target/dif.sh@45 -- # for sub in "$@" 00:26:07.550 17:29:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:07.550 17:29:36 -- target/dif.sh@36 -- # local sub_id=0 00:26:07.550 17:29:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:07.550 17:29:36 -- target/dif.sh@109 -- # bs=4k 00:26:07.550 17:29:36 -- target/dif.sh@109 -- # numjobs=8 00:26:07.550 17:29:36 -- target/dif.sh@109 -- # iodepth=16 00:26:07.550 17:29:36 -- target/dif.sh@109 -- # runtime= 00:26:07.550 17:29:36 -- target/dif.sh@109 -- # files=2 00:26:07.550 17:29:36 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:07.550 17:29:36 -- target/dif.sh@28 -- # local sub 00:26:07.550 17:29:36 -- target/dif.sh@30 -- # for sub in "$@" 00:26:07.550 17:29:36 -- target/dif.sh@31 -- # create_subsystem 0 00:26:07.550 17:29:36 -- target/dif.sh@18 -- # local sub_id=0 00:26:07.550 17:29:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 bdev_null0 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 [2024-04-25 17:29:36.504430] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@30 -- # for sub in "$@" 00:26:07.550 17:29:36 -- target/dif.sh@31 -- # create_subsystem 1 00:26:07.550 17:29:36 -- target/dif.sh@18 -- # local sub_id=1 00:26:07.550 17:29:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 bdev_null1 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 
17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@30 -- # for sub in "$@" 00:26:07.550 17:29:36 -- target/dif.sh@31 -- # create_subsystem 2 00:26:07.550 17:29:36 -- target/dif.sh@18 -- # local sub_id=2 00:26:07.550 17:29:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 bdev_null2 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:07.550 17:29:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.550 17:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.550 17:29:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.550 17:29:36 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:07.550 17:29:36 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:07.550 17:29:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:07.550 17:29:36 -- nvmf/common.sh@521 -- # config=() 00:26:07.550 17:29:36 -- nvmf/common.sh@521 -- # local subsystem config 00:26:07.550 17:29:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:07.550 17:29:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:07.550 { 00:26:07.550 "params": { 00:26:07.550 "name": "Nvme$subsystem", 00:26:07.550 "trtype": "$TEST_TRANSPORT", 00:26:07.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.550 "adrfam": "ipv4", 00:26:07.550 "trsvcid": "$NVMF_PORT", 00:26:07.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.550 "hdgst": ${hdgst:-false}, 00:26:07.550 "ddgst": ${ddgst:-false} 00:26:07.550 }, 00:26:07.550 "method": "bdev_nvme_attach_controller" 00:26:07.550 } 00:26:07.550 EOF 00:26:07.550 )") 00:26:07.550 17:29:36 -- target/dif.sh@82 -- # gen_fio_conf 00:26:07.550 17:29:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.550 17:29:36 -- target/dif.sh@54 -- # local file 00:26:07.550 17:29:36 -- target/dif.sh@56 -- # cat 00:26:07.550 17:29:36 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.550 17:29:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:07.550 17:29:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:07.550 17:29:36 -- nvmf/common.sh@543 -- # cat 00:26:07.550 17:29:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:07.550 17:29:36 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.550 17:29:36 -- common/autotest_common.sh@1327 -- # shift 00:26:07.550 17:29:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:07.550 17:29:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.550 17:29:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:07.550 17:29:36 -- target/dif.sh@72 -- # (( file <= files )) 00:26:07.551 17:29:36 -- target/dif.sh@73 -- # cat 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:07.551 17:29:36 -- target/dif.sh@72 -- # (( file++ )) 00:26:07.551 17:29:36 -- target/dif.sh@72 -- # (( file <= files )) 00:26:07.551 17:29:36 -- target/dif.sh@73 -- # cat 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:07.551 17:29:36 -- target/dif.sh@72 -- # (( file++ )) 00:26:07.551 17:29:36 -- target/dif.sh@72 -- # (( file <= files )) 00:26:07.551 17:29:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:07.551 17:29:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:07.551 { 00:26:07.551 "params": { 00:26:07.551 "name": "Nvme$subsystem", 00:26:07.551 "trtype": "$TEST_TRANSPORT", 00:26:07.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.551 "adrfam": "ipv4", 00:26:07.551 "trsvcid": "$NVMF_PORT", 00:26:07.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.551 "hdgst": ${hdgst:-false}, 00:26:07.551 "ddgst": ${ddgst:-false} 00:26:07.551 }, 00:26:07.551 "method": "bdev_nvme_attach_controller" 00:26:07.551 } 00:26:07.551 EOF 00:26:07.551 )") 00:26:07.551 17:29:36 -- nvmf/common.sh@543 -- # cat 00:26:07.551 17:29:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:07.551 17:29:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:07.551 { 00:26:07.551 "params": { 00:26:07.551 "name": "Nvme$subsystem", 00:26:07.551 "trtype": "$TEST_TRANSPORT", 00:26:07.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.551 "adrfam": "ipv4", 00:26:07.551 "trsvcid": "$NVMF_PORT", 00:26:07.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.551 "hdgst": ${hdgst:-false}, 00:26:07.551 "ddgst": ${ddgst:-false} 00:26:07.551 }, 00:26:07.551 "method": "bdev_nvme_attach_controller" 00:26:07.551 } 00:26:07.551 EOF 00:26:07.551 )") 00:26:07.551 17:29:36 -- nvmf/common.sh@543 -- # cat 00:26:07.551 17:29:36 -- nvmf/common.sh@545 -- # jq . 
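For reference, the create_subsystems 0 1 2 sequence traced above maps onto plain SPDK RPCs; rpc_cmd in the test scripts forwards its arguments to scripts/rpc.py against the running target. Below is a sketch of the equivalent direct calls for one subsystem, using the exact arguments from the log; the rpc client path is an assumption, and a running nvmf target with the tcp transport already created is a prerequisite.

# Equivalent of create_subsystem 2 as traced above, issued directly via rpc.py.
rpc=../spdk/scripts/rpc.py   # hypothetical path to the SPDK rpc client

$rpc bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 \
    --serial-number 53313233-2 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 \
    -t tcp -a 10.0.0.2 -s 4420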
00:26:07.551 17:29:36 -- nvmf/common.sh@546 -- # IFS=, 00:26:07.551 17:29:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:07.551 "params": { 00:26:07.551 "name": "Nvme0", 00:26:07.551 "trtype": "tcp", 00:26:07.551 "traddr": "10.0.0.2", 00:26:07.551 "adrfam": "ipv4", 00:26:07.551 "trsvcid": "4420", 00:26:07.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:07.551 "hdgst": false, 00:26:07.551 "ddgst": false 00:26:07.551 }, 00:26:07.551 "method": "bdev_nvme_attach_controller" 00:26:07.551 },{ 00:26:07.551 "params": { 00:26:07.551 "name": "Nvme1", 00:26:07.551 "trtype": "tcp", 00:26:07.551 "traddr": "10.0.0.2", 00:26:07.551 "adrfam": "ipv4", 00:26:07.551 "trsvcid": "4420", 00:26:07.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:07.551 "hdgst": false, 00:26:07.551 "ddgst": false 00:26:07.551 }, 00:26:07.551 "method": "bdev_nvme_attach_controller" 00:26:07.551 },{ 00:26:07.551 "params": { 00:26:07.551 "name": "Nvme2", 00:26:07.551 "trtype": "tcp", 00:26:07.551 "traddr": "10.0.0.2", 00:26:07.551 "adrfam": "ipv4", 00:26:07.551 "trsvcid": "4420", 00:26:07.551 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:07.551 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:07.551 "hdgst": false, 00:26:07.551 "ddgst": false 00:26:07.551 }, 00:26:07.551 "method": "bdev_nvme_attach_controller" 00:26:07.551 }' 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:07.551 17:29:36 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:07.551 17:29:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:07.551 17:29:36 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:07.551 17:29:36 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:07.551 17:29:36 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:07.551 17:29:36 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.551 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:07.551 ... 00:26:07.551 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:07.551 ... 00:26:07.551 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:07.551 ... 
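The LD_PRELOAD line above comes from the wrapper around the fio spdk_bdev plugin: before launching fio it probes the plugin with ldd for an ASan runtime (libasan or libclang_rt.asan) and, if one is linked in, preloads it ahead of the plugin; in this run nothing matches, so only the plugin itself ends up in LD_PRELOAD. A rough stand-alone sketch of that probe, not the wrapper's actual source, with the paths taken from the log:

# Sketch of the sanitizer-preload probe traced above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # Third column of `ldd` output holds the resolved library path.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
ld_preload="$asan_lib $plugin"
echo "LD_PRELOAD=$ld_preload"
# The test then runs, with the JSON config and job file fed in on /dev/fd/62 and /dev/fd/61:
#   LD_PRELOAD="$ld_preload" /usr/src/fio/fio --ioengine=spdk_bdev \
#       --spdk_json_conf /dev/fd/62 /dev/fd/61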
00:26:07.551 fio-3.35 00:26:07.551 Starting 24 threads 00:26:19.751 00:26:19.751 filename0: (groupid=0, jobs=1): err= 0: pid=97419: Thu Apr 25 17:29:47 2024 00:26:19.751 read: IOPS=216, BW=864KiB/s (885kB/s)(8648KiB/10007msec) 00:26:19.751 slat (usec): min=5, max=5019, avg=13.62, stdev=116.76 00:26:19.751 clat (msec): min=33, max=148, avg=73.98, stdev=22.26 00:26:19.751 lat (msec): min=33, max=148, avg=74.00, stdev=22.26 00:26:19.751 clat percentiles (msec): 00:26:19.751 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:26:19.751 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 77], 00:26:19.751 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:26:19.751 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 148], 99.95th=[ 148], 00:26:19.751 | 99.99th=[ 148] 00:26:19.751 bw ( KiB/s): min= 512, max= 1072, per=4.41%, avg=858.30, stdev=144.93, samples=20 00:26:19.751 iops : min= 128, max= 268, avg=214.55, stdev=36.20, samples=20 00:26:19.751 lat (msec) : 50=16.60%, 100=70.17%, 250=13.23% 00:26:19.751 cpu : usr=38.18%, sys=0.98%, ctx=1123, majf=0, minf=9 00:26:19.751 IO depths : 1=0.7%, 2=1.4%, 4=7.6%, 8=77.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:19.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.751 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.751 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.751 filename0: (groupid=0, jobs=1): err= 0: pid=97420: Thu Apr 25 17:29:47 2024 00:26:19.751 read: IOPS=216, BW=866KiB/s (887kB/s)(8684KiB/10022msec) 00:26:19.751 slat (usec): min=3, max=8022, avg=14.27, stdev=171.99 00:26:19.751 clat (msec): min=35, max=167, avg=73.73, stdev=22.48 00:26:19.752 lat (msec): min=35, max=167, avg=73.75, stdev=22.49 00:26:19.752 clat percentiles (msec): 00:26:19.752 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:26:19.752 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 72], 00:26:19.752 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 120], 00:26:19.752 | 99.00th=[ 136], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:26:19.752 | 99.99th=[ 169] 00:26:19.752 bw ( KiB/s): min= 512, max= 1120, per=4.43%, avg=861.80, stdev=170.63, samples=20 00:26:19.752 iops : min= 128, max= 280, avg=215.45, stdev=42.66, samples=20 00:26:19.752 lat (msec) : 50=16.95%, 100=70.38%, 250=12.67% 00:26:19.752 cpu : usr=32.25%, sys=0.92%, ctx=900, majf=0, minf=9 00:26:19.752 IO depths : 1=1.1%, 2=2.3%, 4=8.0%, 8=75.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.752 filename0: (groupid=0, jobs=1): err= 0: pid=97421: Thu Apr 25 17:29:47 2024 00:26:19.752 read: IOPS=220, BW=883KiB/s (904kB/s)(8864KiB/10042msec) 00:26:19.752 slat (usec): min=4, max=8021, avg=15.54, stdev=190.29 00:26:19.752 clat (usec): min=1438, max=168865, avg=72361.00, stdev=26630.44 00:26:19.752 lat (usec): min=1446, max=168874, avg=72376.54, stdev=26628.41 00:26:19.752 clat percentiles (usec): 00:26:19.752 | 1.00th=[ 1729], 5.00th=[ 28443], 10.00th=[ 47973], 20.00th=[ 51643], 00:26:19.752 | 30.00th=[ 60031], 40.00th=[ 68682], 50.00th=[ 71828], 60.00th=[ 73925], 00:26:19.752 | 70.00th=[ 81265], 
80.00th=[ 93848], 90.00th=[107480], 95.00th=[117965], 00:26:19.752 | 99.00th=[145753], 99.50th=[147850], 99.90th=[168821], 99.95th=[168821], 00:26:19.752 | 99.99th=[168821] 00:26:19.752 bw ( KiB/s): min= 640, max= 1650, per=4.52%, avg=879.10, stdev=225.08, samples=20 00:26:19.752 iops : min= 160, max= 412, avg=219.75, stdev=56.18, samples=20 00:26:19.752 lat (msec) : 2=2.17%, 10=2.17%, 50=12.32%, 100=67.33%, 250=16.02% 00:26:19.752 cpu : usr=36.19%, sys=1.12%, ctx=1042, majf=0, minf=0 00:26:19.752 IO depths : 1=1.0%, 2=2.3%, 4=9.3%, 8=75.0%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.752 filename0: (groupid=0, jobs=1): err= 0: pid=97422: Thu Apr 25 17:29:47 2024 00:26:19.752 read: IOPS=175, BW=700KiB/s (717kB/s)(7016KiB/10021msec) 00:26:19.752 slat (usec): min=4, max=4024, avg=21.56, stdev=204.74 00:26:19.752 clat (msec): min=26, max=193, avg=91.20, stdev=25.60 00:26:19.752 lat (msec): min=26, max=193, avg=91.22, stdev=25.60 00:26:19.752 clat percentiles (msec): 00:26:19.752 | 1.00th=[ 40], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 72], 00:26:19.752 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 95], 00:26:19.752 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 125], 95.00th=[ 146], 00:26:19.752 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 194], 99.95th=[ 194], 00:26:19.752 | 99.99th=[ 194] 00:26:19.752 bw ( KiB/s): min= 512, max= 896, per=3.57%, avg=695.15, stdev=107.68, samples=20 00:26:19.752 iops : min= 128, max= 224, avg=173.70, stdev=26.98, samples=20 00:26:19.752 lat (msec) : 50=2.05%, 100=65.39%, 250=32.55% 00:26:19.752 cpu : usr=38.84%, sys=1.10%, ctx=1210, majf=0, minf=9 00:26:19.752 IO depths : 1=3.3%, 2=7.1%, 4=17.7%, 8=62.7%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 4=92.0%, 8=2.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=1754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.752 filename0: (groupid=0, jobs=1): err= 0: pid=97423: Thu Apr 25 17:29:47 2024 00:26:19.752 read: IOPS=230, BW=921KiB/s (943kB/s)(9236KiB/10032msec) 00:26:19.752 slat (usec): min=4, max=8019, avg=13.73, stdev=166.71 00:26:19.752 clat (msec): min=33, max=150, avg=69.41, stdev=21.12 00:26:19.752 lat (msec): min=33, max=150, avg=69.43, stdev=21.12 00:26:19.752 clat percentiles (msec): 00:26:19.752 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 48], 00:26:19.752 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 00:26:19.752 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 110], 00:26:19.752 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 150], 99.95th=[ 150], 00:26:19.752 | 99.99th=[ 150] 00:26:19.752 bw ( KiB/s): min= 560, max= 1248, per=4.72%, avg=917.10, stdev=165.70, samples=20 00:26:19.752 iops : min= 140, max= 312, avg=229.25, stdev=41.41, samples=20 00:26:19.752 lat (msec) : 50=23.04%, 100=67.43%, 250=9.53% 00:26:19.752 cpu : usr=36.08%, sys=0.90%, ctx=1013, majf=0, minf=9 00:26:19.752 IO depths : 1=0.1%, 2=0.3%, 4=4.8%, 8=80.8%, 16=14.0%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 
4=88.8%, 8=7.3%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.752 filename0: (groupid=0, jobs=1): err= 0: pid=97424: Thu Apr 25 17:29:47 2024 00:26:19.752 read: IOPS=184, BW=736KiB/s (754kB/s)(7384KiB/10031msec) 00:26:19.752 slat (usec): min=3, max=3059, avg=12.99, stdev=71.08 00:26:19.752 clat (msec): min=45, max=178, avg=86.83, stdev=21.10 00:26:19.752 lat (msec): min=45, max=178, avg=86.84, stdev=21.10 00:26:19.752 clat percentiles (msec): 00:26:19.752 | 1.00th=[ 46], 5.00th=[ 61], 10.00th=[ 69], 20.00th=[ 72], 00:26:19.752 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 86], 00:26:19.752 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 123], 00:26:19.752 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 180], 00:26:19.752 | 99.99th=[ 180] 00:26:19.752 bw ( KiB/s): min= 640, max= 848, per=3.76%, avg=731.80, stdev=67.57, samples=20 00:26:19.752 iops : min= 160, max= 212, avg=182.90, stdev=16.84, samples=20 00:26:19.752 lat (msec) : 50=2.49%, 100=70.26%, 250=27.25% 00:26:19.752 cpu : usr=42.97%, sys=1.05%, ctx=1418, majf=0, minf=9 00:26:19.752 IO depths : 1=3.0%, 2=6.1%, 4=14.6%, 8=66.2%, 16=10.1%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.752 filename0: (groupid=0, jobs=1): err= 0: pid=97425: Thu Apr 25 17:29:47 2024 00:26:19.752 read: IOPS=176, BW=705KiB/s (722kB/s)(7064KiB/10014msec) 00:26:19.752 slat (usec): min=4, max=4022, avg=17.74, stdev=165.23 00:26:19.752 clat (msec): min=24, max=168, avg=90.56, stdev=27.58 00:26:19.752 lat (msec): min=24, max=168, avg=90.58, stdev=27.58 00:26:19.752 clat percentiles (msec): 00:26:19.752 | 1.00th=[ 41], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 71], 00:26:19.752 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 92], 00:26:19.752 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 136], 95.00th=[ 148], 00:26:19.752 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:26:19.752 | 99.99th=[ 169] 00:26:19.752 bw ( KiB/s): min= 384, max= 896, per=3.60%, avg=700.10, stdev=136.87, samples=20 00:26:19.752 iops : min= 96, max= 224, avg=174.95, stdev=34.23, samples=20 00:26:19.752 lat (msec) : 50=3.45%, 100=64.27%, 250=32.28% 00:26:19.752 cpu : usr=45.03%, sys=1.46%, ctx=1142, majf=0, minf=9 00:26:19.752 IO depths : 1=4.0%, 2=8.4%, 4=19.3%, 8=59.7%, 16=8.6%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=1766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.752 filename0: (groupid=0, jobs=1): err= 0: pid=97426: Thu Apr 25 17:29:47 2024 00:26:19.752 read: IOPS=202, BW=811KiB/s (831kB/s)(8136KiB/10030msec) 00:26:19.752 slat (usec): min=3, max=3294, avg=12.58, stdev=73.65 00:26:19.752 clat (msec): min=36, max=167, avg=78.70, stdev=27.51 00:26:19.752 lat (msec): min=36, max=167, avg=78.72, stdev=27.51 00:26:19.752 clat percentiles (msec): 00:26:19.752 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 53], 00:26:19.752 | 30.00th=[ 62], 
40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 81], 00:26:19.752 | 70.00th=[ 88], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 136], 00:26:19.752 | 99.00th=[ 150], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:26:19.752 | 99.99th=[ 169] 00:26:19.752 bw ( KiB/s): min= 512, max= 1168, per=4.15%, avg=807.25, stdev=201.44, samples=20 00:26:19.752 iops : min= 128, max= 292, avg=201.80, stdev=50.35, samples=20 00:26:19.752 lat (msec) : 50=15.14%, 100=63.67%, 250=21.19% 00:26:19.752 cpu : usr=39.50%, sys=1.19%, ctx=1528, majf=0, minf=9 00:26:19.752 IO depths : 1=0.9%, 2=2.0%, 4=8.7%, 8=75.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.752 filename1: (groupid=0, jobs=1): err= 0: pid=97427: Thu Apr 25 17:29:47 2024 00:26:19.752 read: IOPS=228, BW=914KiB/s (936kB/s)(9172KiB/10034msec) 00:26:19.752 slat (usec): min=6, max=8020, avg=24.20, stdev=289.59 00:26:19.752 clat (msec): min=36, max=154, avg=69.80, stdev=20.88 00:26:19.752 lat (msec): min=36, max=154, avg=69.82, stdev=20.88 00:26:19.752 clat percentiles (msec): 00:26:19.752 | 1.00th=[ 38], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 49], 00:26:19.752 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:26:19.752 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 108], 00:26:19.752 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 155], 99.95th=[ 155], 00:26:19.752 | 99.99th=[ 155] 00:26:19.752 bw ( KiB/s): min= 688, max= 1168, per=4.68%, avg=910.45, stdev=139.98, samples=20 00:26:19.752 iops : min= 172, max= 292, avg=227.60, stdev=35.00, samples=20 00:26:19.752 lat (msec) : 50=22.90%, 100=68.34%, 250=8.77% 00:26:19.752 cpu : usr=38.30%, sys=1.10%, ctx=1057, majf=0, minf=9 00:26:19.752 IO depths : 1=0.5%, 2=1.0%, 4=6.7%, 8=78.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:19.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 complete : 0=0.0%, 4=89.1%, 8=6.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.752 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename1: (groupid=0, jobs=1): err= 0: pid=97428: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=203, BW=813KiB/s (832kB/s)(8144KiB/10021msec) 00:26:19.753 slat (nsec): min=6071, max=28435, avg=10484.20, stdev=3105.24 00:26:19.753 clat (msec): min=34, max=179, avg=78.59, stdev=23.34 00:26:19.753 lat (msec): min=34, max=179, avg=78.60, stdev=23.34 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:26:19.753 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:26:19.753 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 121], 00:26:19.753 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 180], 99.95th=[ 180], 00:26:19.753 | 99.99th=[ 180] 00:26:19.753 bw ( KiB/s): min= 560, max= 1024, per=4.17%, avg=811.90, stdev=119.13, samples=20 00:26:19.753 iops : min= 140, max= 256, avg=202.95, stdev=29.78, samples=20 00:26:19.753 lat (msec) : 50=15.18%, 100=68.22%, 250=16.60% 00:26:19.753 cpu : usr=32.35%, sys=0.83%, ctx=893, majf=0, minf=9 00:26:19.753 IO depths : 1=0.8%, 2=1.8%, 4=10.3%, 8=74.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:19.753 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename1: (groupid=0, jobs=1): err= 0: pid=97429: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=217, BW=870KiB/s (891kB/s)(8720KiB/10026msec) 00:26:19.753 slat (usec): min=4, max=8026, avg=16.42, stdev=188.68 00:26:19.753 clat (msec): min=34, max=144, avg=73.41, stdev=20.81 00:26:19.753 lat (msec): min=34, max=144, avg=73.43, stdev=20.81 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:26:19.753 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:26:19.753 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 110], 00:26:19.753 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:26:19.753 | 99.99th=[ 144] 00:26:19.753 bw ( KiB/s): min= 552, max= 1080, per=4.46%, avg=867.55, stdev=137.69, samples=20 00:26:19.753 iops : min= 138, max= 270, avg=216.85, stdev=34.38, samples=20 00:26:19.753 lat (msec) : 50=12.61%, 100=74.04%, 250=13.35% 00:26:19.753 cpu : usr=42.08%, sys=1.09%, ctx=1509, majf=0, minf=9 00:26:19.753 IO depths : 1=2.0%, 2=4.0%, 4=12.7%, 8=70.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 complete : 0=0.0%, 4=90.4%, 8=4.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 issued rwts: total=2180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename1: (groupid=0, jobs=1): err= 0: pid=97430: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=183, BW=734KiB/s (752kB/s)(7340KiB/10001msec) 00:26:19.753 slat (usec): min=4, max=8023, avg=15.14, stdev=187.10 00:26:19.753 clat (msec): min=35, max=183, avg=87.09, stdev=25.84 00:26:19.753 lat (msec): min=35, max=183, avg=87.11, stdev=25.84 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 71], 00:26:19.753 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 87], 00:26:19.753 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:26:19.753 | 99.00th=[ 157], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 184], 00:26:19.753 | 99.99th=[ 184] 00:26:19.753 bw ( KiB/s): min= 513, max= 1040, per=3.75%, avg=729.79, stdev=123.70, samples=19 00:26:19.753 iops : min= 128, max= 260, avg=182.37, stdev=30.97, samples=19 00:26:19.753 lat (msec) : 50=6.81%, 100=62.72%, 250=30.46% 00:26:19.753 cpu : usr=33.28%, sys=1.06%, ctx=960, majf=0, minf=9 00:26:19.753 IO depths : 1=2.3%, 2=5.2%, 4=15.5%, 8=66.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename1: (groupid=0, jobs=1): err= 0: pid=97431: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=214, BW=856KiB/s (877kB/s)(8584KiB/10028msec) 00:26:19.753 slat (usec): min=3, max=10486, avg=18.75, stdev=252.69 00:26:19.753 clat (msec): min=33, max=157, avg=74.66, stdev=24.43 00:26:19.753 lat (msec): min=33, max=157, avg=74.68, stdev=24.43 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 40], 5.00th=[ 46], 
10.00th=[ 48], 20.00th=[ 54], 00:26:19.753 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 75], 00:26:19.753 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 123], 00:26:19.753 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:26:19.753 | 99.99th=[ 157] 00:26:19.753 bw ( KiB/s): min= 512, max= 1145, per=4.38%, avg=851.65, stdev=173.88, samples=20 00:26:19.753 iops : min= 128, max= 286, avg=212.90, stdev=43.45, samples=20 00:26:19.753 lat (msec) : 50=13.61%, 100=69.11%, 250=17.29% 00:26:19.753 cpu : usr=41.39%, sys=1.19%, ctx=1264, majf=0, minf=9 00:26:19.753 IO depths : 1=0.8%, 2=1.9%, 4=8.0%, 8=76.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename1: (groupid=0, jobs=1): err= 0: pid=97432: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=228, BW=915KiB/s (937kB/s)(9184KiB/10041msec) 00:26:19.753 slat (nsec): min=4629, max=31491, avg=10152.42, stdev=3142.19 00:26:19.753 clat (msec): min=25, max=136, avg=69.86, stdev=21.80 00:26:19.753 lat (msec): min=25, max=136, avg=69.87, stdev=21.80 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 48], 00:26:19.753 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:26:19.753 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 110], 00:26:19.753 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:26:19.753 | 99.99th=[ 136] 00:26:19.753 bw ( KiB/s): min= 688, max= 1224, per=4.69%, avg=912.00, stdev=150.88, samples=20 00:26:19.753 iops : min= 172, max= 306, avg=228.00, stdev=37.72, samples=20 00:26:19.753 lat (msec) : 50=25.61%, 100=64.20%, 250=10.19% 00:26:19.753 cpu : usr=32.27%, sys=0.86%, ctx=923, majf=0, minf=9 00:26:19.753 IO depths : 1=0.3%, 2=0.5%, 4=5.1%, 8=80.1%, 16=13.9%, 32=0.0%, >=64=0.0% 00:26:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 complete : 0=0.0%, 4=88.9%, 8=7.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 issued rwts: total=2296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename1: (groupid=0, jobs=1): err= 0: pid=97433: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=245, BW=982KiB/s (1005kB/s)(9848KiB/10031msec) 00:26:19.753 slat (nsec): min=3678, max=31787, avg=10014.51, stdev=3305.01 00:26:19.753 clat (msec): min=29, max=190, avg=65.10, stdev=21.29 00:26:19.753 lat (msec): min=29, max=190, avg=65.11, stdev=21.29 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:26:19.753 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 67], 00:26:19.753 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 107], 00:26:19.753 | 99.00th=[ 132], 99.50th=[ 146], 99.90th=[ 190], 99.95th=[ 190], 00:26:19.753 | 99.99th=[ 190] 00:26:19.753 bw ( KiB/s): min= 640, max= 1264, per=5.03%, avg=978.40, stdev=178.34, samples=20 00:26:19.753 iops : min= 160, max= 316, avg=244.60, stdev=44.59, samples=20 00:26:19.753 lat (msec) : 50=28.15%, 100=64.13%, 250=7.72% 00:26:19.753 cpu : usr=42.26%, sys=1.19%, ctx=1509, majf=0, minf=9 00:26:19.753 IO depths : 1=0.6%, 2=1.3%, 4=7.4%, 8=77.8%, 16=13.0%, 32=0.0%, >=64=0.0% 
00:26:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 complete : 0=0.0%, 4=89.3%, 8=6.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename1: (groupid=0, jobs=1): err= 0: pid=97434: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=190, BW=760KiB/s (779kB/s)(7620KiB/10021msec) 00:26:19.753 slat (usec): min=4, max=8018, avg=15.78, stdev=188.18 00:26:19.753 clat (msec): min=37, max=171, avg=84.00, stdev=25.19 00:26:19.753 lat (msec): min=37, max=171, avg=84.02, stdev=25.19 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:26:19.753 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 87], 00:26:19.753 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 130], 00:26:19.753 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 171], 99.95th=[ 171], 00:26:19.753 | 99.99th=[ 171] 00:26:19.753 bw ( KiB/s): min= 512, max= 984, per=3.90%, avg=758.00, stdev=142.42, samples=20 00:26:19.753 iops : min= 128, max= 246, avg=189.50, stdev=35.61, samples=20 00:26:19.753 lat (msec) : 50=9.82%, 100=63.99%, 250=26.19% 00:26:19.753 cpu : usr=38.01%, sys=1.14%, ctx=891, majf=0, minf=9 00:26:19.753 IO depths : 1=2.0%, 2=4.3%, 4=13.4%, 8=69.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:19.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 complete : 0=0.0%, 4=90.7%, 8=4.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.753 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.753 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.753 filename2: (groupid=0, jobs=1): err= 0: pid=97435: Thu Apr 25 17:29:47 2024 00:26:19.753 read: IOPS=181, BW=727KiB/s (745kB/s)(7296KiB/10031msec) 00:26:19.753 slat (usec): min=4, max=5020, avg=20.28, stdev=200.49 00:26:19.753 clat (msec): min=39, max=187, avg=87.84, stdev=26.21 00:26:19.753 lat (msec): min=39, max=187, avg=87.86, stdev=26.21 00:26:19.753 clat percentiles (msec): 00:26:19.753 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 58], 20.00th=[ 69], 00:26:19.753 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 90], 00:26:19.753 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 144], 00:26:19.753 | 99.00th=[ 157], 99.50th=[ 186], 99.90th=[ 188], 99.95th=[ 188], 00:26:19.753 | 99.99th=[ 188] 00:26:19.753 bw ( KiB/s): min= 512, max= 1120, per=3.72%, avg=723.05, stdev=134.03, samples=20 00:26:19.754 iops : min= 128, max= 280, avg=180.75, stdev=33.51, samples=20 00:26:19.754 lat (msec) : 50=4.71%, 100=60.80%, 250=34.48% 00:26:19.754 cpu : usr=41.97%, sys=1.38%, ctx=1189, majf=0, minf=9 00:26:19.754 IO depths : 1=3.7%, 2=7.8%, 4=18.8%, 8=60.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=92.4%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=1824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 filename2: (groupid=0, jobs=1): err= 0: pid=97436: Thu Apr 25 17:29:47 2024 00:26:19.754 read: IOPS=176, BW=705KiB/s (722kB/s)(7060KiB/10010msec) 00:26:19.754 slat (nsec): min=3824, max=39405, avg=10965.22, stdev=3599.49 00:26:19.754 clat (msec): min=13, max=165, avg=90.64, stdev=24.39 00:26:19.754 lat (msec): min=13, max=165, avg=90.65, stdev=24.39 00:26:19.754 
clat percentiles (msec): 00:26:19.754 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 69], 20.00th=[ 72], 00:26:19.754 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 96], 00:26:19.754 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:26:19.754 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 165], 99.95th=[ 165], 00:26:19.754 | 99.99th=[ 165] 00:26:19.754 bw ( KiB/s): min= 512, max= 848, per=3.57%, avg=695.79, stdev=114.44, samples=19 00:26:19.754 iops : min= 128, max= 212, avg=173.95, stdev=28.61, samples=19 00:26:19.754 lat (msec) : 20=0.91%, 50=4.48%, 100=59.26%, 250=35.35% 00:26:19.754 cpu : usr=32.44%, sys=0.85%, ctx=890, majf=0, minf=9 00:26:19.754 IO depths : 1=1.5%, 2=3.5%, 4=12.4%, 8=71.3%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=90.4%, 8=4.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=1765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 filename2: (groupid=0, jobs=1): err= 0: pid=97437: Thu Apr 25 17:29:47 2024 00:26:19.754 read: IOPS=175, BW=704KiB/s (721kB/s)(7040KiB/10004msec) 00:26:19.754 slat (usec): min=4, max=8021, avg=19.41, stdev=269.94 00:26:19.754 clat (msec): min=28, max=188, avg=90.82, stdev=26.62 00:26:19.754 lat (msec): min=28, max=188, avg=90.84, stdev=26.62 00:26:19.754 clat percentiles (msec): 00:26:19.754 | 1.00th=[ 38], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 72], 00:26:19.754 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 96], 00:26:19.754 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 124], 95.00th=[ 144], 00:26:19.754 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:26:19.754 | 99.99th=[ 188] 00:26:19.754 bw ( KiB/s): min= 512, max= 896, per=3.60%, avg=700.63, stdev=122.79, samples=19 00:26:19.754 iops : min= 128, max= 224, avg=175.16, stdev=30.70, samples=19 00:26:19.754 lat (msec) : 50=4.38%, 100=61.08%, 250=34.55% 00:26:19.754 cpu : usr=33.91%, sys=0.90%, ctx=958, majf=0, minf=9 00:26:19.754 IO depths : 1=3.0%, 2=6.3%, 4=15.3%, 8=65.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 filename2: (groupid=0, jobs=1): err= 0: pid=97438: Thu Apr 25 17:29:47 2024 00:26:19.754 read: IOPS=188, BW=754KiB/s (772kB/s)(7544KiB/10007msec) 00:26:19.754 slat (usec): min=4, max=8021, avg=15.05, stdev=184.50 00:26:19.754 clat (msec): min=34, max=155, avg=84.72, stdev=25.92 00:26:19.754 lat (msec): min=34, max=155, avg=84.74, stdev=25.92 00:26:19.754 clat percentiles (msec): 00:26:19.754 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:26:19.754 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 88], 00:26:19.754 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 118], 95.00th=[ 136], 00:26:19.754 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:26:19.754 | 99.99th=[ 157] 00:26:19.754 bw ( KiB/s): min= 512, max= 1072, per=3.91%, avg=760.11, stdev=159.36, samples=19 00:26:19.754 iops : min= 128, max= 268, avg=190.00, stdev=39.81, samples=19 00:26:19.754 lat (msec) : 50=10.71%, 100=62.73%, 250=26.56% 00:26:19.754 cpu : usr=37.00%, sys=1.21%, ctx=1091, majf=0, minf=9 00:26:19.754 IO depths : 
1=2.1%, 2=4.6%, 4=12.8%, 8=69.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=1886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 filename2: (groupid=0, jobs=1): err= 0: pid=97439: Thu Apr 25 17:29:47 2024 00:26:19.754 read: IOPS=235, BW=942KiB/s (964kB/s)(9436KiB/10021msec) 00:26:19.754 slat (usec): min=4, max=4021, avg=17.41, stdev=165.16 00:26:19.754 clat (msec): min=33, max=172, avg=67.82, stdev=20.34 00:26:19.754 lat (msec): min=33, max=172, avg=67.84, stdev=20.34 00:26:19.754 clat percentiles (msec): 00:26:19.754 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 50], 00:26:19.754 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 71], 00:26:19.754 | 70.00th=[ 77], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:26:19.754 | 99.00th=[ 126], 99.50th=[ 142], 99.90th=[ 174], 99.95th=[ 174], 00:26:19.754 | 99.99th=[ 174] 00:26:19.754 bw ( KiB/s): min= 688, max= 1152, per=4.83%, avg=940.80, stdev=138.74, samples=20 00:26:19.754 iops : min= 172, max= 288, avg=235.20, stdev=34.69, samples=20 00:26:19.754 lat (msec) : 50=22.72%, 100=68.93%, 250=8.35% 00:26:19.754 cpu : usr=46.58%, sys=1.31%, ctx=1286, majf=0, minf=9 00:26:19.754 IO depths : 1=0.6%, 2=1.3%, 4=7.6%, 8=77.6%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=89.4%, 8=6.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 filename2: (groupid=0, jobs=1): err= 0: pid=97440: Thu Apr 25 17:29:47 2024 00:26:19.754 read: IOPS=205, BW=823KiB/s (843kB/s)(8244KiB/10014msec) 00:26:19.754 slat (usec): min=4, max=4020, avg=12.39, stdev=88.41 00:26:19.754 clat (msec): min=14, max=178, avg=77.66, stdev=25.10 00:26:19.754 lat (msec): min=14, max=178, avg=77.67, stdev=25.10 00:26:19.754 clat percentiles (msec): 00:26:19.754 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:26:19.754 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:26:19.754 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 121], 00:26:19.754 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:26:19.754 | 99.99th=[ 180] 00:26:19.754 bw ( KiB/s): min= 536, max= 1072, per=4.21%, avg=818.00, stdev=150.31, samples=20 00:26:19.754 iops : min= 134, max= 268, avg=204.50, stdev=37.58, samples=20 00:26:19.754 lat (msec) : 20=0.10%, 50=15.43%, 100=65.89%, 250=18.58% 00:26:19.754 cpu : usr=36.35%, sys=0.96%, ctx=984, majf=0, minf=9 00:26:19.754 IO depths : 1=1.6%, 2=3.3%, 4=11.8%, 8=71.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 filename2: (groupid=0, jobs=1): err= 0: pid=97441: Thu Apr 25 17:29:47 2024 00:26:19.754 read: IOPS=175, BW=704KiB/s (721kB/s)(7040KiB/10003msec) 00:26:19.754 slat (usec): min=3, max=3566, avg=12.50, stdev=84.84 00:26:19.754 clat (msec): min=13, max=180, avg=90.83, stdev=25.23 00:26:19.754 lat 
(msec): min=13, max=180, avg=90.85, stdev=25.23 00:26:19.754 clat percentiles (msec): 00:26:19.754 | 1.00th=[ 40], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 72], 00:26:19.754 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 96], 00:26:19.754 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 144], 00:26:19.754 | 99.00th=[ 159], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:26:19.754 | 99.99th=[ 180] 00:26:19.754 bw ( KiB/s): min= 512, max= 896, per=3.60%, avg=700.68, stdev=124.55, samples=19 00:26:19.754 iops : min= 128, max= 224, avg=175.16, stdev=31.14, samples=19 00:26:19.754 lat (msec) : 20=0.91%, 50=2.33%, 100=64.32%, 250=32.44% 00:26:19.754 cpu : usr=36.31%, sys=1.08%, ctx=1074, majf=0, minf=9 00:26:19.754 IO depths : 1=3.9%, 2=8.2%, 4=19.3%, 8=60.0%, 16=8.6%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=92.6%, 8=1.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 filename2: (groupid=0, jobs=1): err= 0: pid=97442: Thu Apr 25 17:29:47 2024 00:26:19.754 read: IOPS=199, BW=797KiB/s (816kB/s)(7988KiB/10024msec) 00:26:19.754 slat (usec): min=7, max=8026, avg=17.07, stdev=200.59 00:26:19.754 clat (msec): min=34, max=181, avg=80.16, stdev=24.70 00:26:19.754 lat (msec): min=34, max=181, avg=80.17, stdev=24.70 00:26:19.754 clat percentiles (msec): 00:26:19.754 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:26:19.754 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:26:19.754 | 70.00th=[ 94], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 121], 00:26:19.754 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 182], 99.95th=[ 182], 00:26:19.754 | 99.99th=[ 182] 00:26:19.754 bw ( KiB/s): min= 512, max= 1088, per=4.07%, avg=792.25, stdev=141.39, samples=20 00:26:19.754 iops : min= 128, max= 272, avg=198.05, stdev=35.34, samples=20 00:26:19.754 lat (msec) : 50=11.82%, 100=67.50%, 250=20.68% 00:26:19.754 cpu : usr=33.71%, sys=1.09%, ctx=964, majf=0, minf=9 00:26:19.754 IO depths : 1=2.2%, 2=4.8%, 4=13.9%, 8=68.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:19.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.754 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.754 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:19.754 00:26:19.754 Run status group 0 (all jobs): 00:26:19.754 READ: bw=19.0MiB/s (19.9MB/s), 700KiB/s-982KiB/s (717kB/s-1005kB/s), io=191MiB (200MB), run=10001-10042msec 00:26:19.754 17:29:47 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:19.754 17:29:47 -- target/dif.sh@43 -- # local sub 00:26:19.754 17:29:47 -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.754 17:29:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:19.754 17:29:47 -- target/dif.sh@36 -- # local sub_id=0 00:26:19.755 17:29:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 
17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.755 17:29:47 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:19.755 17:29:47 -- target/dif.sh@36 -- # local sub_id=1 00:26:19.755 17:29:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.755 17:29:47 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:19.755 17:29:47 -- target/dif.sh@36 -- # local sub_id=2 00:26:19.755 17:29:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:19.755 17:29:47 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:19.755 17:29:47 -- target/dif.sh@115 -- # numjobs=2 00:26:19.755 17:29:47 -- target/dif.sh@115 -- # iodepth=8 00:26:19.755 17:29:47 -- target/dif.sh@115 -- # runtime=5 00:26:19.755 17:29:47 -- target/dif.sh@115 -- # files=1 00:26:19.755 17:29:47 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:19.755 17:29:47 -- target/dif.sh@28 -- # local sub 00:26:19.755 17:29:47 -- target/dif.sh@30 -- # for sub in "$@" 00:26:19.755 17:29:47 -- target/dif.sh@31 -- # create_subsystem 0 00:26:19.755 17:29:47 -- target/dif.sh@18 -- # local sub_id=0 00:26:19.755 17:29:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 bdev_null0 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 [2024-04-25 17:29:47.792138] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@30 -- # for sub in "$@" 00:26:19.755 17:29:47 -- target/dif.sh@31 -- # create_subsystem 1 00:26:19.755 17:29:47 -- target/dif.sh@18 -- # local sub_id=1 00:26:19.755 17:29:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 bdev_null1 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.755 17:29:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.755 17:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.755 17:29:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.755 17:29:47 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:19.755 17:29:47 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:19.755 17:29:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:19.755 17:29:47 -- nvmf/common.sh@521 -- # config=() 00:26:19.755 17:29:47 -- nvmf/common.sh@521 -- # local subsystem config 00:26:19.755 17:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:19.755 17:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:19.755 { 00:26:19.755 "params": { 00:26:19.755 "name": "Nvme$subsystem", 00:26:19.755 "trtype": "$TEST_TRANSPORT", 00:26:19.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.755 "adrfam": "ipv4", 00:26:19.755 "trsvcid": "$NVMF_PORT", 00:26:19.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.755 "hdgst": ${hdgst:-false}, 00:26:19.755 "ddgst": ${ddgst:-false} 00:26:19.755 }, 00:26:19.755 "method": "bdev_nvme_attach_controller" 00:26:19.755 } 00:26:19.755 EOF 00:26:19.755 )") 00:26:19.755 17:29:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.755 17:29:47 -- target/dif.sh@82 -- # gen_fio_conf 00:26:19.755 17:29:47 -- target/dif.sh@54 -- # local file 00:26:19.755 17:29:47 -- target/dif.sh@56 -- # cat 00:26:19.755 17:29:47 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.755 17:29:47 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:19.755 17:29:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:19.755 17:29:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:19.755 17:29:47 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.755 17:29:47 -- common/autotest_common.sh@1327 -- # shift 00:26:19.755 17:29:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:19.755 17:29:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.755 17:29:47 -- nvmf/common.sh@543 -- # cat 00:26:19.755 17:29:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:19.755 17:29:47 -- target/dif.sh@72 -- # (( file <= files )) 00:26:19.755 17:29:47 -- target/dif.sh@73 -- # cat 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:19.755 17:29:47 -- target/dif.sh@72 -- # (( file++ )) 00:26:19.755 17:29:47 -- target/dif.sh@72 -- # (( file <= files )) 00:26:19.755 17:29:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:19.755 17:29:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:19.755 { 00:26:19.755 "params": { 00:26:19.755 "name": "Nvme$subsystem", 00:26:19.755 "trtype": "$TEST_TRANSPORT", 00:26:19.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.755 "adrfam": "ipv4", 00:26:19.755 "trsvcid": "$NVMF_PORT", 00:26:19.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.755 "hdgst": ${hdgst:-false}, 00:26:19.755 "ddgst": ${ddgst:-false} 00:26:19.755 }, 00:26:19.755 "method": "bdev_nvme_attach_controller" 00:26:19.755 } 00:26:19.755 EOF 00:26:19.755 )") 00:26:19.755 17:29:47 -- nvmf/common.sh@543 -- # cat 00:26:19.755 17:29:47 -- nvmf/common.sh@545 -- # jq . 
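The jq . step above is where gen_nvmf_target_json finishes assembling the bdev configuration that fio will read on /dev/fd/62; the two controller stanzas are printed in flattened form immediately below. Laid out readably, the document has roughly the following shape. Note the outer subsystems/bdev/config wrapper is assumed from nvmf/common.sh and is not shown verbatim in this trace; only the bdev_nvme_attach_controller stanzas themselves are copied from the printout.

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false
              }
            },
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }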
00:26:19.755 17:29:47 -- nvmf/common.sh@546 -- # IFS=, 00:26:19.755 17:29:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:19.755 "params": { 00:26:19.755 "name": "Nvme0", 00:26:19.755 "trtype": "tcp", 00:26:19.755 "traddr": "10.0.0.2", 00:26:19.755 "adrfam": "ipv4", 00:26:19.755 "trsvcid": "4420", 00:26:19.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:19.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:19.755 "hdgst": false, 00:26:19.755 "ddgst": false 00:26:19.755 }, 00:26:19.755 "method": "bdev_nvme_attach_controller" 00:26:19.755 },{ 00:26:19.755 "params": { 00:26:19.755 "name": "Nvme1", 00:26:19.755 "trtype": "tcp", 00:26:19.755 "traddr": "10.0.0.2", 00:26:19.755 "adrfam": "ipv4", 00:26:19.755 "trsvcid": "4420", 00:26:19.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:19.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:19.755 "hdgst": false, 00:26:19.755 "ddgst": false 00:26:19.755 }, 00:26:19.755 "method": "bdev_nvme_attach_controller" 00:26:19.755 }' 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:19.755 17:29:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:19.755 17:29:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:19.755 17:29:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:19.755 17:29:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:19.756 17:29:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:19.756 17:29:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:19.756 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:19.756 ... 00:26:19.756 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:19.756 ... 
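Before the threads start, the fio_plugin wrapper traced above resolves any sanitizer runtime the spdk_bdev plugin links against and preloads it ahead of the plugin. Condensed into a standalone sketch: the individual commands are taken from the xtrace lines above, while the loop structure and the asan_libs variable are simplifications, not the literal autotest_common.sh source.

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=(libasan libclang_rt.asan)
    asan_libs=
    for sanitizer in "${sanitizers[@]}"; do
        # if the plugin was built with ASan, its runtime must be preloaded before the plugin itself
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $lib ]] && asan_libs="$asan_libs $lib"
    done
    # bdev JSON config arrives on fd 62, the generated fio job file on fd 61
    LD_PRELOAD="$asan_libs $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61

In this run neither sanitizer library was found (asan_lib stayed empty both times), so LD_PRELOAD ends up containing only the plugin itself, exactly as the trace shows.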
00:26:19.756 fio-3.35 00:26:19.756 Starting 4 threads 00:26:23.960 00:26:23.960 filename0: (groupid=0, jobs=1): err= 0: pid=97569: Thu Apr 25 17:29:53 2024 00:26:23.960 read: IOPS=2025, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5004msec) 00:26:23.960 slat (usec): min=6, max=167, avg= 8.81, stdev= 4.56 00:26:23.960 clat (usec): min=1266, max=4949, avg=3903.73, stdev=206.67 00:26:23.960 lat (usec): min=1282, max=4957, avg=3912.54, stdev=206.58 00:26:23.960 clat percentiles (usec): 00:26:23.960 | 1.00th=[ 3523], 5.00th=[ 3720], 10.00th=[ 3752], 20.00th=[ 3818], 00:26:23.960 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3916], 00:26:23.960 | 70.00th=[ 3949], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4228], 00:26:23.960 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 4621], 99.95th=[ 4752], 00:26:23.960 | 99.99th=[ 4883] 00:26:23.960 bw ( KiB/s): min=15872, max=16512, per=25.03%, avg=16199.11, stdev=213.33, samples=9 00:26:23.960 iops : min= 1984, max= 2064, avg=2024.89, stdev=26.67, samples=9 00:26:23.960 lat (msec) : 2=0.24%, 4=80.28%, 10=19.49% 00:26:23.960 cpu : usr=93.98%, sys=4.74%, ctx=56, majf=0, minf=0 00:26:23.960 IO depths : 1=11.0%, 2=24.9%, 4=50.1%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 issued rwts: total=10136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.960 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.960 filename0: (groupid=0, jobs=1): err= 0: pid=97570: Thu Apr 25 17:29:53 2024 00:26:23.960 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5003msec) 00:26:23.960 slat (nsec): min=6810, max=54374, avg=13324.31, stdev=5104.69 00:26:23.960 clat (usec): min=2785, max=5978, avg=3894.01, stdev=180.88 00:26:23.960 lat (usec): min=2796, max=6002, avg=3907.33, stdev=180.98 00:26:23.960 clat percentiles (usec): 00:26:23.960 | 1.00th=[ 3654], 5.00th=[ 3720], 10.00th=[ 3752], 20.00th=[ 3785], 00:26:23.960 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3851], 60.00th=[ 3884], 00:26:23.960 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4228], 00:26:23.960 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5407], 99.95th=[ 5932], 00:26:23.960 | 99.99th=[ 5997] 00:26:23.960 bw ( KiB/s): min=15872, max=16512, per=24.97%, avg=16160.00, stdev=235.15, samples=9 00:26:23.960 iops : min= 1984, max= 2064, avg=2020.00, stdev=29.39, samples=9 00:26:23.960 lat (msec) : 4=82.53%, 10=17.47% 00:26:23.960 cpu : usr=94.04%, sys=4.88%, ctx=5, majf=0, minf=9 00:26:23.960 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 issued rwts: total=10112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.960 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.960 filename1: (groupid=0, jobs=1): err= 0: pid=97571: Thu Apr 25 17:29:53 2024 00:26:23.960 read: IOPS=2023, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5003msec) 00:26:23.960 slat (nsec): min=6691, max=53232, avg=10205.88, stdev=4852.93 00:26:23.960 clat (usec): min=1902, max=5456, avg=3916.35, stdev=186.91 00:26:23.960 lat (usec): min=1910, max=5468, avg=3926.55, stdev=186.99 00:26:23.960 clat percentiles (usec): 00:26:23.960 | 1.00th=[ 3425], 5.00th=[ 3720], 10.00th=[ 3752], 20.00th=[ 3818], 00:26:23.960 | 30.00th=[ 3851], 40.00th=[ 3851], 
50.00th=[ 3884], 60.00th=[ 3916], 00:26:23.960 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4293], 00:26:23.960 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[ 5342], 00:26:23.960 | 99.99th=[ 5473] 00:26:23.960 bw ( KiB/s): min=15872, max=16512, per=24.99%, avg=16176.00, stdev=228.11, samples=9 00:26:23.960 iops : min= 1984, max= 2064, avg=2022.00, stdev=28.51, samples=9 00:26:23.960 lat (msec) : 2=0.04%, 4=79.27%, 10=20.69% 00:26:23.960 cpu : usr=94.72%, sys=4.18%, ctx=11, majf=0, minf=0 00:26:23.960 IO depths : 1=4.0%, 2=8.2%, 4=66.8%, 8=21.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 issued rwts: total=10122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.960 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.960 filename1: (groupid=0, jobs=1): err= 0: pid=97572: Thu Apr 25 17:29:53 2024 00:26:23.960 read: IOPS=2021, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5002msec) 00:26:23.960 slat (nsec): min=6646, max=55933, avg=13406.70, stdev=5053.23 00:26:23.960 clat (usec): min=2069, max=6193, avg=3887.66, stdev=184.72 00:26:23.960 lat (usec): min=2081, max=6206, avg=3901.07, stdev=185.25 00:26:23.960 clat percentiles (usec): 00:26:23.960 | 1.00th=[ 3621], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3785], 00:26:23.960 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3884], 00:26:23.960 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4080], 95.00th=[ 4228], 00:26:23.960 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 5538], 99.95th=[ 5800], 00:26:23.960 | 99.99th=[ 6128] 00:26:23.960 bw ( KiB/s): min=15872, max=16512, per=24.97%, avg=16160.00, stdev=232.00, samples=9 00:26:23.960 iops : min= 1984, max= 2064, avg=2020.00, stdev=29.00, samples=9 00:26:23.960 lat (msec) : 4=83.26%, 10=16.74% 00:26:23.960 cpu : usr=94.38%, sys=4.52%, ctx=4, majf=0, minf=0 00:26:23.960 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:23.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.960 issued rwts: total=10112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.960 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:23.960 00:26:23.960 Run status group 0 (all jobs): 00:26:23.960 READ: bw=63.2MiB/s (66.3MB/s), 15.8MiB/s-15.8MiB/s (16.6MB/s-16.6MB/s), io=316MiB (332MB), run=5002-5004msec 00:26:23.960 17:29:53 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:23.960 17:29:53 -- target/dif.sh@43 -- # local sub 00:26:23.960 17:29:53 -- target/dif.sh@45 -- # for sub in "$@" 00:26:23.960 17:29:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:23.960 17:29:53 -- target/dif.sh@36 -- # local sub_id=0 00:26:23.960 17:29:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.960 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.960 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.960 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.960 17:29:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:23.960 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.960 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.960 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.960 17:29:53 -- 
target/dif.sh@45 -- # for sub in "$@" 00:26:23.960 17:29:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:23.960 17:29:53 -- target/dif.sh@36 -- # local sub_id=1 00:26:23.960 17:29:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.960 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.960 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.960 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.960 17:29:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:23.960 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.960 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.960 ************************************ 00:26:23.960 END TEST fio_dif_rand_params 00:26:23.960 ************************************ 00:26:23.960 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.960 00:26:23.960 real 0m23.235s 00:26:23.960 user 2m5.996s 00:26:23.960 sys 0m5.073s 00:26:23.960 17:29:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:23.960 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.960 17:29:53 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:23.960 17:29:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:23.960 17:29:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.960 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.960 ************************************ 00:26:23.960 START TEST fio_dif_digest 00:26:23.960 ************************************ 00:26:23.960 17:29:53 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:26:23.960 17:29:53 -- target/dif.sh@123 -- # local NULL_DIF 00:26:23.960 17:29:53 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:23.960 17:29:53 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:23.960 17:29:53 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:23.960 17:29:53 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:23.960 17:29:53 -- target/dif.sh@127 -- # numjobs=3 00:26:23.960 17:29:53 -- target/dif.sh@127 -- # iodepth=3 00:26:23.960 17:29:53 -- target/dif.sh@127 -- # runtime=10 00:26:23.960 17:29:53 -- target/dif.sh@128 -- # hdgst=true 00:26:23.960 17:29:53 -- target/dif.sh@128 -- # ddgst=true 00:26:23.960 17:29:53 -- target/dif.sh@130 -- # create_subsystems 0 00:26:23.960 17:29:53 -- target/dif.sh@28 -- # local sub 00:26:23.960 17:29:53 -- target/dif.sh@30 -- # for sub in "$@" 00:26:23.960 17:29:53 -- target/dif.sh@31 -- # create_subsystem 0 00:26:23.960 17:29:53 -- target/dif.sh@18 -- # local sub_id=0 00:26:23.960 17:29:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:23.960 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.961 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.961 bdev_null0 00:26:23.961 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.961 17:29:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:24.219 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.219 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:24.219 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.219 17:29:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:24.219 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.219 
17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:24.219 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.219 17:29:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.219 17:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.219 17:29:53 -- common/autotest_common.sh@10 -- # set +x 00:26:24.219 [2024-04-25 17:29:53.956077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.219 17:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.219 17:29:53 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:24.219 17:29:53 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:24.219 17:29:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:24.219 17:29:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.220 17:29:53 -- nvmf/common.sh@521 -- # config=() 00:26:24.220 17:29:53 -- nvmf/common.sh@521 -- # local subsystem config 00:26:24.220 17:29:53 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.220 17:29:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:26:24.220 17:29:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:24.220 17:29:53 -- target/dif.sh@82 -- # gen_fio_conf 00:26:24.220 17:29:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:26:24.220 { 00:26:24.220 "params": { 00:26:24.220 "name": "Nvme$subsystem", 00:26:24.220 "trtype": "$TEST_TRANSPORT", 00:26:24.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.220 "adrfam": "ipv4", 00:26:24.220 "trsvcid": "$NVMF_PORT", 00:26:24.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.220 "hdgst": ${hdgst:-false}, 00:26:24.220 "ddgst": ${ddgst:-false} 00:26:24.220 }, 00:26:24.220 "method": "bdev_nvme_attach_controller" 00:26:24.220 } 00:26:24.220 EOF 00:26:24.220 )") 00:26:24.220 17:29:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.220 17:29:53 -- target/dif.sh@54 -- # local file 00:26:24.220 17:29:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:24.220 17:29:53 -- target/dif.sh@56 -- # cat 00:26:24.220 17:29:53 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.220 17:29:53 -- common/autotest_common.sh@1327 -- # shift 00:26:24.220 17:29:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:24.220 17:29:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.220 17:29:53 -- nvmf/common.sh@543 -- # cat 00:26:24.220 17:29:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:24.220 17:29:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:24.220 17:29:53 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.220 17:29:53 -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.220 17:29:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:24.220 17:29:53 -- nvmf/common.sh@545 -- # jq . 
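The digest test stands its target up the same way as the earlier runs, except the null bdev is created with DIF type 3 and the host-side config printed just below enables header and data digests (hdgst/ddgst true). Outside the harness, the equivalent target setup is a handful of rpc.py calls; the subcommands and arguments below are copied from the rpc_cmd lines above, and only the direct rpc.py invocation against the default socket is an assumption (rpc_cmd in the trace is a thin wrapper around rpc.py).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420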
00:26:24.220 17:29:53 -- nvmf/common.sh@546 -- # IFS=, 00:26:24.220 17:29:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:26:24.220 "params": { 00:26:24.220 "name": "Nvme0", 00:26:24.220 "trtype": "tcp", 00:26:24.220 "traddr": "10.0.0.2", 00:26:24.220 "adrfam": "ipv4", 00:26:24.220 "trsvcid": "4420", 00:26:24.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:24.220 "hdgst": true, 00:26:24.220 "ddgst": true 00:26:24.220 }, 00:26:24.220 "method": "bdev_nvme_attach_controller" 00:26:24.220 }' 00:26:24.220 17:29:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:24.220 17:29:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:24.220 17:29:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.220 17:29:53 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.220 17:29:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:24.220 17:29:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:24.220 17:29:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:24.220 17:29:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:24.220 17:29:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:24.220 17:29:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.220 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:24.220 ... 00:26:24.220 fio-3.35 00:26:24.220 Starting 3 threads 00:26:36.423 00:26:36.423 filename0: (groupid=0, jobs=1): err= 0: pid=97682: Thu Apr 25 17:30:04 2024 00:26:36.423 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(315MiB/10003msec) 00:26:36.423 slat (nsec): min=6873, max=42147, avg=11932.01, stdev=3769.07 00:26:36.423 clat (usec): min=8879, max=16304, avg=11879.46, stdev=774.76 00:26:36.423 lat (usec): min=8890, max=16326, avg=11891.39, stdev=775.01 00:26:36.423 clat percentiles (usec): 00:26:36.423 | 1.00th=[10290], 5.00th=[10683], 10.00th=[10945], 20.00th=[11207], 00:26:36.423 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:26:36.423 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:26:36.423 | 99.00th=[13960], 99.50th=[14222], 99.90th=[15401], 99.95th=[16319], 00:26:36.423 | 99.99th=[16319] 00:26:36.423 bw ( KiB/s): min=29952, max=33536, per=38.31%, avg=32279.53, stdev=994.97, samples=19 00:26:36.423 iops : min= 234, max= 262, avg=252.16, stdev= 7.78, samples=19 00:26:36.423 lat (msec) : 10=0.32%, 20=99.68% 00:26:36.423 cpu : usr=93.06%, sys=5.62%, ctx=23, majf=0, minf=0 00:26:36.423 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.423 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.423 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.423 filename0: (groupid=0, jobs=1): err= 0: pid=97683: Thu Apr 25 17:30:04 2024 00:26:36.423 read: IOPS=175, BW=21.9MiB/s (22.9MB/s)(219MiB/10004msec) 00:26:36.423 slat (usec): min=6, max=189, avg=12.82, stdev= 6.89 00:26:36.423 clat (usec): min=8163, max=21537, avg=17122.49, stdev=1024.77 00:26:36.423 lat (usec): min=8173, max=21554, avg=17135.31, stdev=1025.75 00:26:36.423 clat 
percentiles (usec): 00:26:36.423 | 1.00th=[15008], 5.00th=[15664], 10.00th=[15926], 20.00th=[16319], 00:26:36.423 | 30.00th=[16581], 40.00th=[16909], 50.00th=[16909], 60.00th=[17171], 00:26:36.423 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:26:36.423 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20317], 99.95th=[21627], 00:26:36.423 | 99.99th=[21627] 00:26:36.423 bw ( KiB/s): min=20480, max=23552, per=26.56%, avg=22379.79, stdev=745.46, samples=19 00:26:36.423 iops : min= 160, max= 184, avg=174.84, stdev= 5.82, samples=19 00:26:36.423 lat (msec) : 10=0.06%, 20=99.60%, 50=0.34% 00:26:36.423 cpu : usr=93.26%, sys=5.26%, ctx=136, majf=0, minf=9 00:26:36.423 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.423 issued rwts: total=1751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.423 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.423 filename0: (groupid=0, jobs=1): err= 0: pid=97684: Thu Apr 25 17:30:04 2024 00:26:36.423 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(289MiB/10004msec) 00:26:36.423 slat (nsec): min=6778, max=44021, avg=11514.27, stdev=4200.14 00:26:36.423 clat (usec): min=4051, max=17022, avg=12968.68, stdev=1032.15 00:26:36.423 lat (usec): min=4061, max=17033, avg=12980.19, stdev=1032.22 00:26:36.423 clat percentiles (usec): 00:26:36.423 | 1.00th=[10945], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:26:36.423 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:26:36.423 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:26:36.423 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16581], 99.95th=[16909], 00:26:36.423 | 99.99th=[16909] 00:26:36.423 bw ( KiB/s): min=26880, max=30720, per=35.05%, avg=29534.32, stdev=977.47, samples=19 00:26:36.423 iops : min= 210, max= 240, avg=230.74, stdev= 7.64, samples=19 00:26:36.423 lat (msec) : 10=0.04%, 20=99.96% 00:26:36.423 cpu : usr=93.35%, sys=5.41%, ctx=7, majf=0, minf=0 00:26:36.423 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:36.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:36.423 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:36.423 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:36.423 00:26:36.423 Run status group 0 (all jobs): 00:26:36.423 READ: bw=82.3MiB/s (86.3MB/s), 21.9MiB/s-31.5MiB/s (22.9MB/s-33.1MB/s), io=823MiB (863MB), run=10003-10004msec 00:26:36.423 17:30:04 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:36.423 17:30:04 -- target/dif.sh@43 -- # local sub 00:26:36.423 17:30:04 -- target/dif.sh@45 -- # for sub in "$@" 00:26:36.423 17:30:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:36.423 17:30:04 -- target/dif.sh@36 -- # local sub_id=0 00:26:36.423 17:30:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:36.423 17:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.423 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:26:36.423 17:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.423 17:30:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:36.423 17:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:36.423 17:30:04 -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.423 ************************************ 00:26:36.423 END TEST fio_dif_digest 00:26:36.423 ************************************ 00:26:36.423 17:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:36.423 00:26:36.423 real 0m10.877s 00:26:36.423 user 0m28.565s 00:26:36.423 sys 0m1.846s 00:26:36.423 17:30:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:36.423 17:30:04 -- common/autotest_common.sh@10 -- # set +x 00:26:36.423 17:30:04 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:36.423 17:30:04 -- target/dif.sh@147 -- # nvmftestfini 00:26:36.423 17:30:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:36.423 17:30:04 -- nvmf/common.sh@117 -- # sync 00:26:36.423 17:30:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:36.423 17:30:04 -- nvmf/common.sh@120 -- # set +e 00:26:36.423 17:30:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:36.423 17:30:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:36.423 rmmod nvme_tcp 00:26:36.423 rmmod nvme_fabrics 00:26:36.423 rmmod nvme_keyring 00:26:36.423 17:30:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:36.423 17:30:04 -- nvmf/common.sh@124 -- # set -e 00:26:36.423 17:30:04 -- nvmf/common.sh@125 -- # return 0 00:26:36.423 17:30:04 -- nvmf/common.sh@478 -- # '[' -n 96907 ']' 00:26:36.423 17:30:04 -- nvmf/common.sh@479 -- # killprocess 96907 00:26:36.423 17:30:04 -- common/autotest_common.sh@936 -- # '[' -z 96907 ']' 00:26:36.423 17:30:04 -- common/autotest_common.sh@940 -- # kill -0 96907 00:26:36.423 17:30:04 -- common/autotest_common.sh@941 -- # uname 00:26:36.423 17:30:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:36.423 17:30:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96907 00:26:36.423 killing process with pid 96907 00:26:36.423 17:30:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:36.423 17:30:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:36.423 17:30:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96907' 00:26:36.423 17:30:04 -- common/autotest_common.sh@955 -- # kill 96907 00:26:36.423 17:30:04 -- common/autotest_common.sh@960 -- # wait 96907 00:26:36.423 17:30:05 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:26:36.423 17:30:05 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:36.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:36.423 Waiting for block devices as requested 00:26:36.423 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:36.423 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:36.423 17:30:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:36.423 17:30:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:36.423 17:30:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.423 17:30:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.423 17:30:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.423 17:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:36.423 17:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.423 17:30:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:36.423 00:26:36.423 real 0m59.512s 00:26:36.423 user 3m51.618s 00:26:36.423 sys 0m14.197s 00:26:36.423 17:30:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:36.423 
17:30:05 -- common/autotest_common.sh@10 -- # set +x 00:26:36.423 ************************************ 00:26:36.423 END TEST nvmf_dif 00:26:36.423 ************************************ 00:26:36.423 17:30:05 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:36.423 17:30:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:36.423 17:30:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:36.423 17:30:05 -- common/autotest_common.sh@10 -- # set +x 00:26:36.423 ************************************ 00:26:36.423 START TEST nvmf_abort_qd_sizes 00:26:36.423 ************************************ 00:26:36.424 17:30:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:36.424 * Looking for test storage... 00:26:36.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:36.424 17:30:05 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:36.424 17:30:05 -- nvmf/common.sh@7 -- # uname -s 00:26:36.424 17:30:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.424 17:30:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.424 17:30:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.424 17:30:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.424 17:30:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.424 17:30:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.424 17:30:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.424 17:30:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.424 17:30:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.424 17:30:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.424 17:30:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:26:36.424 17:30:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:26:36.424 17:30:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.424 17:30:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.424 17:30:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:36.424 17:30:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.424 17:30:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:36.424 17:30:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.424 17:30:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.424 17:30:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.424 17:30:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.424 17:30:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.424 17:30:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.424 17:30:05 -- paths/export.sh@5 -- # export PATH 00:26:36.424 17:30:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.424 17:30:05 -- nvmf/common.sh@47 -- # : 0 00:26:36.424 17:30:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:36.424 17:30:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:36.424 17:30:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.424 17:30:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.424 17:30:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.424 17:30:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:36.424 17:30:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:36.424 17:30:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:36.424 17:30:05 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:36.424 17:30:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:36.424 17:30:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.424 17:30:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:36.424 17:30:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:36.424 17:30:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:36.424 17:30:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.424 17:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:36.424 17:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.424 17:30:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:36.424 17:30:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:36.424 17:30:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:36.424 17:30:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:36.424 17:30:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:36.424 17:30:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:36.424 17:30:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.424 17:30:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.424 17:30:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:36.424 17:30:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:36.424 17:30:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:36.424 17:30:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
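The variables above name the pieces of the virtual test network, and the ip/iptables calls that follow in the trace wire them together: one veth pair for the initiator, two veth pairs for the target with the target ends moved into a private namespace, and all bridge-side ends enslaved to nvmf_br. Collected into one standalone sketch; every command appears individually in the trace below, only the grouping and ordering are condensed here.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings at the end of that block (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply verify the topology before the target is started.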
00:26:36.424 17:30:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:36.424 17:30:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.424 17:30:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:36.424 17:30:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:36.424 17:30:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:36.424 17:30:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:36.424 17:30:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:36.424 17:30:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:36.424 Cannot find device "nvmf_tgt_br" 00:26:36.424 17:30:05 -- nvmf/common.sh@155 -- # true 00:26:36.424 17:30:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:36.424 Cannot find device "nvmf_tgt_br2" 00:26:36.424 17:30:05 -- nvmf/common.sh@156 -- # true 00:26:36.424 17:30:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:36.424 17:30:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:36.424 Cannot find device "nvmf_tgt_br" 00:26:36.424 17:30:05 -- nvmf/common.sh@158 -- # true 00:26:36.424 17:30:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:36.424 Cannot find device "nvmf_tgt_br2" 00:26:36.424 17:30:05 -- nvmf/common.sh@159 -- # true 00:26:36.424 17:30:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:36.424 17:30:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:36.424 17:30:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:36.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:36.424 17:30:06 -- nvmf/common.sh@162 -- # true 00:26:36.424 17:30:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:36.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:36.424 17:30:06 -- nvmf/common.sh@163 -- # true 00:26:36.424 17:30:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:36.424 17:30:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:36.424 17:30:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:36.424 17:30:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:36.424 17:30:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:36.424 17:30:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:36.424 17:30:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:36.424 17:30:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:36.424 17:30:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:36.424 17:30:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:36.424 17:30:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:36.424 17:30:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:36.424 17:30:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:36.424 17:30:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:36.424 17:30:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:36.424 17:30:06 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:36.424 17:30:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:36.424 17:30:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:36.424 17:30:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:36.424 17:30:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:36.424 17:30:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:36.424 17:30:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:36.424 17:30:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:36.424 17:30:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:36.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:26:36.424 00:26:36.424 --- 10.0.0.2 ping statistics --- 00:26:36.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.424 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:26:36.424 17:30:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:36.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:36.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:26:36.424 00:26:36.424 --- 10.0.0.3 ping statistics --- 00:26:36.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.424 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:26:36.424 17:30:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:36.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:36.424 00:26:36.424 --- 10.0.0.1 ping statistics --- 00:26:36.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.424 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:36.424 17:30:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.424 17:30:06 -- nvmf/common.sh@422 -- # return 0 00:26:36.424 17:30:06 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:26:36.424 17:30:06 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:36.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:36.992 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:37.252 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:37.252 17:30:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.252 17:30:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:37.252 17:30:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:37.252 17:30:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.252 17:30:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:37.252 17:30:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:37.252 17:30:07 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:37.252 17:30:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:37.252 17:30:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:37.252 17:30:07 -- common/autotest_common.sh@10 -- # set +x 00:26:37.252 17:30:07 -- nvmf/common.sh@470 -- # nvmfpid=98276 00:26:37.252 17:30:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:37.252 17:30:07 -- nvmf/common.sh@471 -- # waitforlisten 98276 00:26:37.252 17:30:07 -- 
common/autotest_common.sh@817 -- # '[' -z 98276 ']' 00:26:37.252 17:30:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.252 17:30:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:37.252 17:30:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.252 17:30:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:37.252 17:30:07 -- common/autotest_common.sh@10 -- # set +x 00:26:37.252 [2024-04-25 17:30:07.166298] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:26:37.252 [2024-04-25 17:30:07.166390] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.530 [2024-04-25 17:30:07.308153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.530 [2024-04-25 17:30:07.381435] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.530 [2024-04-25 17:30:07.381766] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.530 [2024-04-25 17:30:07.382023] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.530 [2024-04-25 17:30:07.382289] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.530 [2024-04-25 17:30:07.382414] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
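The target launched a few lines above by nvmfappstart runs inside the namespace built earlier, and the script blocks until its RPC socket answers before issuing configuration RPCs. Reduced to its essentials it looks roughly like this; the launch line is verbatim from the trace, while the polling loop is a simplified stand-in for waitforlisten, which watches the /var/tmp/spdk.sock socket named in the "Waiting for process..." message.

    # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0xf: reactors on cores 0-3
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # simplified stand-in for waitforlisten: poll until the RPC socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done

The four "Reactor started on core N" notices that follow correspond to the 0xf core mask passed on the command line.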
00:26:37.530 [2024-04-25 17:30:07.382759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.530 [2024-04-25 17:30:07.382853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.530 [2024-04-25 17:30:07.382932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.530 [2024-04-25 17:30:07.382931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:38.473 17:30:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:38.473 17:30:08 -- common/autotest_common.sh@850 -- # return 0 00:26:38.473 17:30:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:38.473 17:30:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:38.473 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.473 17:30:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.473 17:30:08 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:38.473 17:30:08 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:38.473 17:30:08 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:38.473 17:30:08 -- scripts/common.sh@309 -- # local bdf bdfs 00:26:38.473 17:30:08 -- scripts/common.sh@310 -- # local nvmes 00:26:38.473 17:30:08 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:26:38.473 17:30:08 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:38.473 17:30:08 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:26:38.473 17:30:08 -- scripts/common.sh@295 -- # local bdf= 00:26:38.473 17:30:08 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:26:38.473 17:30:08 -- scripts/common.sh@230 -- # local class 00:26:38.473 17:30:08 -- scripts/common.sh@231 -- # local subclass 00:26:38.473 17:30:08 -- scripts/common.sh@232 -- # local progif 00:26:38.473 17:30:08 -- scripts/common.sh@233 -- # printf %02x 1 00:26:38.473 17:30:08 -- scripts/common.sh@233 -- # class=01 00:26:38.473 17:30:08 -- scripts/common.sh@234 -- # printf %02x 8 00:26:38.473 17:30:08 -- scripts/common.sh@234 -- # subclass=08 00:26:38.473 17:30:08 -- scripts/common.sh@235 -- # printf %02x 2 00:26:38.473 17:30:08 -- scripts/common.sh@235 -- # progif=02 00:26:38.473 17:30:08 -- scripts/common.sh@237 -- # hash lspci 00:26:38.473 17:30:08 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:26:38.473 17:30:08 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:26:38.474 17:30:08 -- scripts/common.sh@240 -- # grep -i -- -p02 00:26:38.474 17:30:08 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:38.474 17:30:08 -- scripts/common.sh@242 -- # tr -d '"' 00:26:38.474 17:30:08 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:38.474 17:30:08 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:26:38.474 17:30:08 -- scripts/common.sh@15 -- # local i 00:26:38.474 17:30:08 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:26:38.474 17:30:08 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:38.474 17:30:08 -- scripts/common.sh@24 -- # return 0 00:26:38.474 17:30:08 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:26:38.474 17:30:08 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:38.474 17:30:08 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:26:38.474 17:30:08 -- scripts/common.sh@15 -- # local i 00:26:38.474 17:30:08 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:26:38.474 17:30:08 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:38.474 17:30:08 -- scripts/common.sh@24 -- # return 0 00:26:38.474 17:30:08 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:26:38.474 17:30:08 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:38.474 17:30:08 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:26:38.474 17:30:08 -- scripts/common.sh@320 -- # uname -s 00:26:38.474 17:30:08 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:38.474 17:30:08 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:38.474 17:30:08 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:38.474 17:30:08 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:26:38.474 17:30:08 -- scripts/common.sh@320 -- # uname -s 00:26:38.474 17:30:08 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:38.474 17:30:08 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:38.474 17:30:08 -- scripts/common.sh@325 -- # (( 2 )) 00:26:38.474 17:30:08 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:38.474 17:30:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:38.474 17:30:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:38.474 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 ************************************ 00:26:38.474 START TEST spdk_target_abort 00:26:38.474 ************************************ 00:26:38.474 17:30:08 -- common/autotest_common.sh@1111 -- # spdk_target 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:26:38.474 17:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.474 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 spdk_targetn1 00:26:38.474 17:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.474 17:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.474 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 [2024-04-25 17:30:08.389507] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.474 17:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:38.474 17:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.474 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 17:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:38.474 17:30:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.474 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 17:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:38.474 17:30:08 -- 
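The nvme_in_userspace scan traced above builds the controller list (here 0000:00:10.0 and 0000:00:11.0) purely from PCI class codes: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). Condensed from the traced pipeline, the enumeration is essentially:

    # print the BDF of every NVMe controller visible to lspci
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each BDF is then kept only if pci_can_use allows it and /sys/bus/pci/drivers/nvme/<bdf> exists, i.e. the device is bound to the kernel nvme driver, before it lands in the bdfs array.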
common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.474 17:30:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.474 [2024-04-25 17:30:08.417659] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.474 17:30:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:38.474 17:30:08 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:41.757 Initializing NVMe Controllers 00:26:41.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:41.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:41.757 Initialization complete. Launching workers. 
00:26:41.757 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10341, failed: 0 00:26:41.757 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1089, failed to submit 9252 00:26:41.757 success 766, unsuccess 323, failed 0 00:26:41.757 17:30:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:41.757 17:30:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:45.039 Initializing NVMe Controllers 00:26:45.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:45.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:45.039 Initialization complete. Launching workers. 00:26:45.039 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5974, failed: 0 00:26:45.039 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 4732 00:26:45.039 success 259, unsuccess 983, failed 0 00:26:45.039 17:30:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:45.039 17:30:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:48.324 Initializing NVMe Controllers 00:26:48.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:48.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:48.324 Initialization complete. Launching workers. 00:26:48.324 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30190, failed: 0 00:26:48.324 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2736, failed to submit 27454 00:26:48.324 success 431, unsuccess 2305, failed 0 00:26:48.324 17:30:18 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:48.324 17:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.324 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:26:48.324 17:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.324 17:30:18 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:48.324 17:30:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.324 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:26:48.582 17:30:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.582 17:30:18 -- target/abort_qd_sizes.sh@61 -- # killprocess 98276 00:26:48.582 17:30:18 -- common/autotest_common.sh@936 -- # '[' -z 98276 ']' 00:26:48.582 17:30:18 -- common/autotest_common.sh@940 -- # kill -0 98276 00:26:48.582 17:30:18 -- common/autotest_common.sh@941 -- # uname 00:26:48.582 17:30:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:48.582 17:30:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98276 00:26:48.582 killing process with pid 98276 00:26:48.582 17:30:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:48.582 17:30:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:48.582 17:30:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98276' 00:26:48.582 17:30:18 -- common/autotest_common.sh@955 -- # kill 98276 
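Condensed, the spdk_target_abort body above is a single loop over the queue depths declared in qds=(4 24 64); only -q changes between the three passes:

    trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # abort example: submit I/O and abort commands against the exported namespace
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
    done

The success/unsuccess/failed line after each pass is the abort example's own per-run summary of the submitted abort commands.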
00:26:48.582 17:30:18 -- common/autotest_common.sh@960 -- # wait 98276 00:26:48.841 00:26:48.841 real 0m10.417s 00:26:48.841 user 0m42.928s 00:26:48.841 sys 0m1.632s 00:26:48.841 ************************************ 00:26:48.841 END TEST spdk_target_abort 00:26:48.841 ************************************ 00:26:48.841 17:30:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:48.841 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:26:48.841 17:30:18 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:48.841 17:30:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:48.841 17:30:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:48.841 17:30:18 -- common/autotest_common.sh@10 -- # set +x 00:26:49.100 ************************************ 00:26:49.100 START TEST kernel_target_abort 00:26:49.100 ************************************ 00:26:49.100 17:30:18 -- common/autotest_common.sh@1111 -- # kernel_target 00:26:49.100 17:30:18 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:49.100 17:30:18 -- nvmf/common.sh@717 -- # local ip 00:26:49.100 17:30:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:49.100 17:30:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:49.100 17:30:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.100 17:30:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.100 17:30:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:49.100 17:30:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.100 17:30:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:49.100 17:30:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:49.100 17:30:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:49.100 17:30:18 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:49.100 17:30:18 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:49.100 17:30:18 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:49.100 17:30:18 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.100 17:30:18 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.100 17:30:18 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:49.100 17:30:18 -- nvmf/common.sh@628 -- # local block nvme 00:26:49.100 17:30:18 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:49.100 17:30:18 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:49.100 17:30:18 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:49.100 17:30:18 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:49.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:49.359 Waiting for block devices as requested 00:26:49.359 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:49.617 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:49.617 17:30:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:49.617 17:30:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:49.617 17:30:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:49.617 17:30:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:49.617 17:30:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:49.617 17:30:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:49.617 17:30:19 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:49.617 17:30:19 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:49.617 17:30:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:49.617 No valid GPT data, bailing 00:26:49.617 17:30:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:49.617 17:30:19 -- scripts/common.sh@391 -- # pt= 00:26:49.617 17:30:19 -- scripts/common.sh@392 -- # return 1 00:26:49.617 17:30:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:49.617 17:30:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:49.617 17:30:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:49.617 17:30:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:26:49.617 17:30:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:49.617 17:30:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:49.617 17:30:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:49.617 17:30:19 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:26:49.617 17:30:19 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:49.617 17:30:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:49.617 No valid GPT data, bailing 00:26:49.617 17:30:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:49.617 17:30:19 -- scripts/common.sh@391 -- # pt= 00:26:49.617 17:30:19 -- scripts/common.sh@392 -- # return 1 00:26:49.617 17:30:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:26:49.617 17:30:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:49.617 17:30:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:49.617 17:30:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:26:49.617 17:30:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:49.617 17:30:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:49.617 17:30:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:49.617 17:30:19 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:26:49.617 17:30:19 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:49.617 17:30:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:49.876 No valid GPT data, bailing 00:26:49.876 17:30:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:26:49.876 17:30:19 -- scripts/common.sh@391 -- # pt= 00:26:49.876 17:30:19 -- scripts/common.sh@392 -- # return 1 00:26:49.876 17:30:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:26:49.876 17:30:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:49.876 17:30:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:49.876 17:30:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:26:49.876 17:30:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:49.876 17:30:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:49.876 17:30:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:49.876 17:30:19 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:26:49.876 17:30:19 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:49.876 17:30:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:49.876 No valid GPT data, bailing 00:26:49.876 17:30:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:49.876 17:30:19 -- scripts/common.sh@391 -- # pt= 00:26:49.876 17:30:19 -- scripts/common.sh@392 -- # return 1 00:26:49.876 17:30:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:26:49.876 17:30:19 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:26:49.876 17:30:19 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.876 17:30:19 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.877 17:30:19 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:49.877 17:30:19 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:49.877 17:30:19 -- nvmf/common.sh@656 -- # echo 1 00:26:49.877 17:30:19 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:26:49.877 17:30:19 -- nvmf/common.sh@658 -- # echo 1 00:26:49.877 17:30:19 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:49.877 17:30:19 -- nvmf/common.sh@661 -- # echo tcp 00:26:49.877 17:30:19 -- nvmf/common.sh@662 -- # echo 4420 00:26:49.877 17:30:19 -- nvmf/common.sh@663 -- # echo ipv4 00:26:49.877 17:30:19 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:49.877 17:30:19 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 --hostid=d437ca2b-3c98-42cb-bea8-20b3689b86e7 -a 10.0.0.1 -t tcp -s 4420 00:26:49.877 00:26:49.877 Discovery Log Number of Records 2, Generation counter 2 00:26:49.877 =====Discovery Log Entry 0====== 00:26:49.877 trtype: tcp 00:26:49.877 adrfam: ipv4 00:26:49.877 subtype: current discovery subsystem 00:26:49.877 treq: not specified, sq flow control disable supported 00:26:49.877 portid: 1 00:26:49.877 trsvcid: 4420 00:26:49.877 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:49.877 traddr: 10.0.0.1 00:26:49.877 eflags: none 00:26:49.877 sectype: none 00:26:49.877 =====Discovery Log Entry 1====== 00:26:49.877 trtype: tcp 00:26:49.877 adrfam: ipv4 00:26:49.877 subtype: nvme subsystem 00:26:49.877 treq: not specified, sq flow control disable supported 00:26:49.877 portid: 1 00:26:49.877 trsvcid: 4420 00:26:49.877 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:49.877 traddr: 10.0.0.1 00:26:49.877 eflags: none 00:26:49.877 sectype: none 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:49.877 
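The configure_kernel_target sequence above is plain configfs manipulation. The trace shows the echoed values but not every redirection target, so the attribute file names below are partly assumptions (they follow the standard kernel nvmet configfs layout); roughly:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo 1            > "$subsys/attr_allow_any_host"       # accept any host NQN
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # the block device selected above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                     # expose the subsystem on the port
    nvme discover -t tcp -a 10.0.0.1 -s 4420                # should list the discovery entry and testnqn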
17:30:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.877 17:30:19 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:53.160 Initializing NVMe Controllers 00:26:53.160 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:53.160 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:53.160 Initialization complete. Launching workers. 00:26:53.160 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32236, failed: 0 00:26:53.160 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32236, failed to submit 0 00:26:53.160 success 0, unsuccess 32236, failed 0 00:26:53.160 17:30:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.160 17:30:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.446 Initializing NVMe Controllers 00:26:56.446 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:56.446 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:56.446 Initialization complete. Launching workers. 
00:26:56.446 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62013, failed: 0 00:26:56.446 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25048, failed to submit 36965 00:26:56.446 success 0, unsuccess 25048, failed 0 00:26:56.446 17:30:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:56.446 17:30:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:59.736 Initializing NVMe Controllers 00:26:59.736 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:59.736 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:59.736 Initialization complete. Launching workers. 00:26:59.737 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69600, failed: 0 00:26:59.737 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17418, failed to submit 52182 00:26:59.737 success 0, unsuccess 17418, failed 0 00:26:59.737 17:30:29 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:59.737 17:30:29 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:59.737 17:30:29 -- nvmf/common.sh@675 -- # echo 0 00:26:59.737 17:30:29 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.737 17:30:29 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.737 17:30:29 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.737 17:30:29 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.737 17:30:29 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.737 17:30:29 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:26:59.737 17:30:29 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:59.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:00.934 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:00.934 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:00.934 ************************************ 00:27:00.934 END TEST kernel_target_abort 00:27:00.934 ************************************ 00:27:00.934 00:27:00.934 real 0m11.879s 00:27:00.934 user 0m5.570s 00:27:00.934 sys 0m3.643s 00:27:00.934 17:30:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:00.934 17:30:30 -- common/autotest_common.sh@10 -- # set +x 00:27:00.934 17:30:30 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:00.934 17:30:30 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:00.934 17:30:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:00.934 17:30:30 -- nvmf/common.sh@117 -- # sync 00:27:00.934 17:30:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.934 17:30:30 -- nvmf/common.sh@120 -- # set +e 00:27:00.934 17:30:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.934 17:30:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.934 rmmod nvme_tcp 00:27:00.934 rmmod nvme_fabrics 00:27:00.934 rmmod nvme_keyring 00:27:00.934 17:30:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.934 Process with pid 98276 is not found 00:27:00.934 17:30:30 
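clean_kernel_target, traced above, undoes that setup in reverse; the echo 0 destination is not shown in the trace and is assumed to be the namespace enable attribute. Reusing the $subsys/$port shorthand from the sketch above:

    echo 0 > "$subsys/namespaces/1/enable"                  # quiesce the namespace first
    rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # drop the port -> subsystem link
    rmdir  "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

after which setup.sh rebinds the NVMe PCI devices to uio_pci_generic for the next test.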
-- nvmf/common.sh@124 -- # set -e 00:27:00.934 17:30:30 -- nvmf/common.sh@125 -- # return 0 00:27:00.934 17:30:30 -- nvmf/common.sh@478 -- # '[' -n 98276 ']' 00:27:00.934 17:30:30 -- nvmf/common.sh@479 -- # killprocess 98276 00:27:00.934 17:30:30 -- common/autotest_common.sh@936 -- # '[' -z 98276 ']' 00:27:00.934 17:30:30 -- common/autotest_common.sh@940 -- # kill -0 98276 00:27:00.934 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (98276) - No such process 00:27:00.934 17:30:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 98276 is not found' 00:27:00.934 17:30:30 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:27:00.934 17:30:30 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:01.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:01.503 Waiting for block devices as requested 00:27:01.503 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:01.503 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:01.503 17:30:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:01.503 17:30:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:01.503 17:30:31 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.503 17:30:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.503 17:30:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.503 17:30:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:01.503 17:30:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.503 17:30:31 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:01.503 00:27:01.503 real 0m25.670s 00:27:01.503 user 0m49.757s 00:27:01.503 sys 0m6.678s 00:27:01.503 17:30:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:01.503 17:30:31 -- common/autotest_common.sh@10 -- # set +x 00:27:01.503 ************************************ 00:27:01.503 END TEST nvmf_abort_qd_sizes 00:27:01.503 ************************************ 00:27:01.763 17:30:31 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:01.763 17:30:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.763 17:30:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.763 17:30:31 -- common/autotest_common.sh@10 -- # set +x 00:27:01.763 ************************************ 00:27:01.763 START TEST keyring_file 00:27:01.763 ************************************ 00:27:01.763 17:30:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:01.763 * Looking for test storage... 
00:27:01.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:27:01.763 17:30:31 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:27:01.763 17:30:31 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:01.763 17:30:31 -- nvmf/common.sh@7 -- # uname -s 00:27:01.763 17:30:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.763 17:30:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.763 17:30:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.763 17:30:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.763 17:30:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.763 17:30:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.763 17:30:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.763 17:30:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.763 17:30:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.763 17:30:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.763 17:30:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:27:01.763 17:30:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=d437ca2b-3c98-42cb-bea8-20b3689b86e7 00:27:01.763 17:30:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.763 17:30:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.763 17:30:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:01.763 17:30:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.763 17:30:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:01.763 17:30:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.763 17:30:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.763 17:30:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.763 17:30:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.763 17:30:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.763 17:30:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.763 17:30:31 -- paths/export.sh@5 -- # export PATH 00:27:01.763 17:30:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.763 17:30:31 -- nvmf/common.sh@47 -- # : 0 00:27:01.763 17:30:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.763 17:30:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.763 17:30:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.763 17:30:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.763 17:30:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.763 17:30:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.763 17:30:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.763 17:30:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.763 17:30:31 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:01.763 17:30:31 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:01.763 17:30:31 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:01.763 17:30:31 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:01.763 17:30:31 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:01.763 17:30:31 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:01.763 17:30:31 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:01.763 17:30:31 -- keyring/common.sh@15 -- # local name key digest path 00:27:01.763 17:30:31 -- keyring/common.sh@17 -- # name=key0 00:27:01.763 17:30:31 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:01.763 17:30:31 -- keyring/common.sh@17 -- # digest=0 00:27:01.763 17:30:31 -- keyring/common.sh@18 -- # mktemp 00:27:01.763 17:30:31 -- keyring/common.sh@18 -- # path=/tmp/tmp.5QJc1TCKXl 00:27:01.763 17:30:31 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:01.763 17:30:31 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:01.763 17:30:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:01.763 17:30:31 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:27:01.763 17:30:31 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:27:01.763 17:30:31 -- nvmf/common.sh@693 -- # digest=0 00:27:01.764 17:30:31 -- nvmf/common.sh@694 -- # python - 00:27:02.023 17:30:31 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5QJc1TCKXl 00:27:02.023 17:30:31 -- keyring/common.sh@23 -- # echo /tmp/tmp.5QJc1TCKXl 00:27:02.023 17:30:31 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5QJc1TCKXl 00:27:02.023 17:30:31 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:02.023 17:30:31 -- keyring/common.sh@15 -- # local name key digest path 00:27:02.023 17:30:31 -- keyring/common.sh@17 -- # name=key1 00:27:02.023 17:30:31 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:02.023 17:30:31 -- keyring/common.sh@17 -- # digest=0 00:27:02.023 17:30:31 -- keyring/common.sh@18 -- # mktemp 00:27:02.023 17:30:31 -- keyring/common.sh@18 -- # path=/tmp/tmp.ambGVJccDd 00:27:02.023 17:30:31 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:02.023 17:30:31 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:27:02.023 17:30:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:02.023 17:30:31 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:27:02.023 17:30:31 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:27:02.023 17:30:31 -- nvmf/common.sh@693 -- # digest=0 00:27:02.023 17:30:31 -- nvmf/common.sh@694 -- # python - 00:27:02.023 17:30:31 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ambGVJccDd 00:27:02.023 17:30:31 -- keyring/common.sh@23 -- # echo /tmp/tmp.ambGVJccDd 00:27:02.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.023 17:30:31 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ambGVJccDd 00:27:02.023 17:30:31 -- keyring/file.sh@30 -- # tgtpid=99162 00:27:02.023 17:30:31 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:02.023 17:30:31 -- keyring/file.sh@32 -- # waitforlisten 99162 00:27:02.023 17:30:31 -- common/autotest_common.sh@817 -- # '[' -z 99162 ']' 00:27:02.023 17:30:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.023 17:30:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:02.023 17:30:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.023 17:30:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:02.023 17:30:31 -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 [2024-04-25 17:30:31.879449] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 00:27:02.024 [2024-04-25 17:30:31.879734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99162 ] 00:27:02.283 [2024-04-25 17:30:32.015088] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.283 [2024-04-25 17:30:32.085605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.247 17:30:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:03.247 17:30:32 -- common/autotest_common.sh@850 -- # return 0 00:27:03.247 17:30:32 -- keyring/file.sh@33 -- # rpc_cmd 00:27:03.247 17:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.247 17:30:32 -- common/autotest_common.sh@10 -- # set +x 00:27:03.247 [2024-04-25 17:30:32.861290] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.247 null0 00:27:03.247 [2024-04-25 17:30:32.893269] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:03.247 [2024-04-25 17:30:32.893470] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:03.247 [2024-04-25 17:30:32.901258] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:03.247 17:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.247 17:30:32 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:03.247 17:30:32 -- common/autotest_common.sh@638 -- # local es=0 00:27:03.248 17:30:32 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:03.248 17:30:32 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:03.248 17:30:32 -- 
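prep_key, traced above, only stages the secrets: each key is a TLS PSK in NVMe interchange format (prefix NVMeTLSkey-1, the hex secret, digest 0) written to a mktemp file and locked down to mode 0600. Once the bdevperf instance started below is listening on /var/tmp/bperf.sock, those files are registered as named keys over its RPC socket; with the temp paths from this run:

    chmod 0600 /tmp/tmp.5QJc1TCKXl /tmp/tmp.ambGVJccDd
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ambGVJccDd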
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.248 17:30:32 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:03.248 17:30:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.248 17:30:32 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:03.248 17:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.248 17:30:32 -- common/autotest_common.sh@10 -- # set +x 00:27:03.248 [2024-04-25 17:30:32.917242] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.2024/04/25 17:30:32 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:27:03.248 request: 00:27:03.248 { 00:27:03.248 "method": "nvmf_subsystem_add_listener", 00:27:03.248 "params": { 00:27:03.248 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:03.248 "secure_channel": false, 00:27:03.248 "listen_address": { 00:27:03.248 "trtype": "tcp", 00:27:03.248 "traddr": "127.0.0.1", 00:27:03.248 "trsvcid": "4420" 00:27:03.248 } 00:27:03.248 } 00:27:03.248 } 00:27:03.248 Got JSON-RPC error response 00:27:03.248 GoRPCClient: error on JSON-RPC call 00:27:03.248 17:30:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:03.248 17:30:32 -- common/autotest_common.sh@641 -- # es=1 00:27:03.248 17:30:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:03.248 17:30:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:03.248 17:30:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:03.248 17:30:32 -- keyring/file.sh@46 -- # bperfpid=99197 00:27:03.248 17:30:32 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:03.248 17:30:32 -- keyring/file.sh@48 -- # waitforlisten 99197 /var/tmp/bperf.sock 00:27:03.248 17:30:32 -- common/autotest_common.sh@817 -- # '[' -z 99197 ']' 00:27:03.248 17:30:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.248 17:30:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:03.248 17:30:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:03.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:03.248 17:30:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:03.248 17:30:32 -- common/autotest_common.sh@10 -- # set +x 00:27:03.248 [2024-04-25 17:30:32.977881] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:27:03.248 [2024-04-25 17:30:32.978162] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99197 ] 00:27:03.248 [2024-04-25 17:30:33.115222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.248 [2024-04-25 17:30:33.183800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.189 17:30:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:04.189 17:30:33 -- common/autotest_common.sh@850 -- # return 0 00:27:04.189 17:30:33 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:04.189 17:30:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:04.189 17:30:34 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ambGVJccDd 00:27:04.189 17:30:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ambGVJccDd 00:27:04.447 17:30:34 -- keyring/file.sh@51 -- # get_key key0 00:27:04.447 17:30:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.447 17:30:34 -- keyring/file.sh@51 -- # jq -r .path 00:27:04.447 17:30:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.447 17:30:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:04.706 17:30:34 -- keyring/file.sh@51 -- # [[ /tmp/tmp.5QJc1TCKXl == \/\t\m\p\/\t\m\p\.\5\Q\J\c\1\T\C\K\X\l ]] 00:27:04.706 17:30:34 -- keyring/file.sh@52 -- # get_key key1 00:27:04.706 17:30:34 -- keyring/file.sh@52 -- # jq -r .path 00:27:04.706 17:30:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.706 17:30:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.706 17:30:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:04.965 17:30:34 -- keyring/file.sh@52 -- # [[ /tmp/tmp.ambGVJccDd == \/\t\m\p\/\t\m\p\.\a\m\b\G\V\J\c\c\D\d ]] 00:27:04.965 17:30:34 -- keyring/file.sh@53 -- # get_refcnt key0 00:27:04.965 17:30:34 -- keyring/common.sh@12 -- # get_key key0 00:27:04.965 17:30:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:04.965 17:30:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.965 17:30:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:04.965 17:30:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.223 17:30:34 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:05.223 17:30:34 -- keyring/file.sh@54 -- # get_refcnt key1 00:27:05.223 17:30:34 -- keyring/common.sh@12 -- # get_key key1 00:27:05.223 17:30:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:05.223 17:30:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:05.223 17:30:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:05.223 17:30:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.482 17:30:35 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:05.482 17:30:35 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:27:05.482 17:30:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.741 [2024-04-25 17:30:35.496610] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:05.741 nvme0n1 00:27:05.741 17:30:35 -- keyring/file.sh@59 -- # get_refcnt key0 00:27:05.741 17:30:35 -- keyring/common.sh@12 -- # get_key key0 00:27:05.741 17:30:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:05.741 17:30:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:05.741 17:30:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.741 17:30:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:06.000 17:30:35 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:06.000 17:30:35 -- keyring/file.sh@60 -- # get_refcnt key1 00:27:06.000 17:30:35 -- keyring/common.sh@12 -- # get_key key1 00:27:06.000 17:30:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:06.000 17:30:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:06.000 17:30:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.000 17:30:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:06.259 17:30:36 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:06.259 17:30:36 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:06.259 Running I/O for 1 seconds... 00:27:07.197 00:27:07.197 Latency(us) 00:27:07.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.197 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:07.197 nvme0n1 : 1.01 13956.64 54.52 0.00 0.00 9142.98 3634.27 14298.76 00:27:07.197 =================================================================================================================== 00:27:07.197 Total : 13956.64 54.52 0.00 0.00 9142.98 3634.27 14298.76 00:27:07.197 0 00:27:07.197 17:30:37 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:07.197 17:30:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:07.455 17:30:37 -- keyring/file.sh@65 -- # get_refcnt key0 00:27:07.455 17:30:37 -- keyring/common.sh@12 -- # get_key key0 00:27:07.455 17:30:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:07.455 17:30:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:07.455 17:30:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.455 17:30:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:07.715 17:30:37 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:07.715 17:30:37 -- keyring/file.sh@66 -- # get_refcnt key1 00:27:07.715 17:30:37 -- keyring/common.sh@12 -- # get_key key1 00:27:07.715 17:30:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:07.715 17:30:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:07.715 17:30:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.715 17:30:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:07.974 
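Put together, the happy-path flow above is: attach an NVMe-oF bdev over the loopback TCP listener using the registered key, then drive I/O through bdevperf's RPC helper; both commands are taken from the trace:

    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Attaching bumps key0's refcount from 1 to 2 (the controller now holds a reference), which is what the (( 2 == 2 )) check verifies.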
17:30:37 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:07.974 17:30:37 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:07.974 17:30:37 -- common/autotest_common.sh@638 -- # local es=0 00:27:07.974 17:30:37 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:07.974 17:30:37 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:27:07.974 17:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:07.974 17:30:37 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:27:07.974 17:30:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:07.974 17:30:37 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:07.974 17:30:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:08.233 [2024-04-25 17:30:37.999488] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:08.233 [2024-04-25 17:30:38.000197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1320920 (107): Transport endpoint is not connected 00:27:08.233 [2024-04-25 17:30:38.001186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1320920 (9): Bad file descriptor 00:27:08.233 [2024-04-25 17:30:38.002183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:08.233 [2024-04-25 17:30:38.002216] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:08.233 [2024-04-25 17:30:38.002225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:27:08.233 2024/04/25 17:30:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:27:08.233 request: 00:27:08.233 { 00:27:08.233 "method": "bdev_nvme_attach_controller", 00:27:08.233 "params": { 00:27:08.233 "name": "nvme0", 00:27:08.233 "trtype": "tcp", 00:27:08.233 "traddr": "127.0.0.1", 00:27:08.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.233 "adrfam": "ipv4", 00:27:08.233 "trsvcid": "4420", 00:27:08.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.233 "psk": "key1" 00:27:08.233 } 00:27:08.233 } 00:27:08.233 Got JSON-RPC error response 00:27:08.233 GoRPCClient: error on JSON-RPC call 00:27:08.233 17:30:38 -- common/autotest_common.sh@641 -- # es=1 00:27:08.233 17:30:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:08.233 17:30:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:08.233 17:30:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:08.233 17:30:38 -- keyring/file.sh@71 -- # get_refcnt key0 00:27:08.233 17:30:38 -- keyring/common.sh@12 -- # get_key key0 00:27:08.233 17:30:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.233 17:30:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.233 17:30:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.233 17:30:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:08.492 17:30:38 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:08.492 17:30:38 -- keyring/file.sh@72 -- # get_refcnt key1 00:27:08.492 17:30:38 -- keyring/common.sh@12 -- # get_key key1 00:27:08.492 17:30:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.492 17:30:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.492 17:30:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.492 17:30:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:08.751 17:30:38 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:08.752 17:30:38 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:08.752 17:30:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:08.752 17:30:38 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:08.752 17:30:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:09.010 17:30:38 -- keyring/file.sh@77 -- # jq length 00:27:09.010 17:30:38 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:09.010 17:30:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.270 17:30:39 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:09.270 17:30:39 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.5QJc1TCKXl 00:27:09.270 17:30:39 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:09.270 17:30:39 -- common/autotest_common.sh@638 -- # local es=0 00:27:09.270 17:30:39 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:09.270 17:30:39 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:27:09.270 
17:30:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:09.270 17:30:39 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:27:09.270 17:30:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:09.270 17:30:39 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:09.270 17:30:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:09.530 [2024-04-25 17:30:39.416559] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5QJc1TCKXl': 0100660 00:27:09.530 [2024-04-25 17:30:39.416607] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:09.530 2024/04/25 17:30:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.5QJc1TCKXl], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:27:09.530 request: 00:27:09.530 { 00:27:09.530 "method": "keyring_file_add_key", 00:27:09.530 "params": { 00:27:09.530 "name": "key0", 00:27:09.530 "path": "/tmp/tmp.5QJc1TCKXl" 00:27:09.530 } 00:27:09.530 } 00:27:09.530 Got JSON-RPC error response 00:27:09.530 GoRPCClient: error on JSON-RPC call 00:27:09.530 17:30:39 -- common/autotest_common.sh@641 -- # es=1 00:27:09.530 17:30:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:09.530 17:30:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:09.530 17:30:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:09.530 17:30:39 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.5QJc1TCKXl 00:27:09.530 17:30:39 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:09.530 17:30:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5QJc1TCKXl 00:27:09.789 17:30:39 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.5QJc1TCKXl 00:27:09.789 17:30:39 -- keyring/file.sh@88 -- # get_refcnt key0 00:27:09.789 17:30:39 -- keyring/common.sh@12 -- # get_key key0 00:27:09.789 17:30:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:09.789 17:30:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:09.789 17:30:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.789 17:30:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:10.048 17:30:39 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:10.048 17:30:39 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.048 17:30:39 -- common/autotest_common.sh@638 -- # local es=0 00:27:10.048 17:30:39 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.048 17:30:39 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:27:10.048 17:30:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:10.048 17:30:39 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:27:10.048 17:30:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:10.048 17:30:39 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.048 17:30:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.306 [2024-04-25 17:30:40.120760] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5QJc1TCKXl': No such file or directory 00:27:10.306 [2024-04-25 17:30:40.120806] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:10.306 [2024-04-25 17:30:40.120831] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:10.306 [2024-04-25 17:30:40.120840] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:10.306 [2024-04-25 17:30:40.120849] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:10.306 2024/04/25 17:30:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:27:10.306 request: 00:27:10.306 { 00:27:10.306 "method": "bdev_nvme_attach_controller", 00:27:10.306 "params": { 00:27:10.306 "name": "nvme0", 00:27:10.306 "trtype": "tcp", 00:27:10.306 "traddr": "127.0.0.1", 00:27:10.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:10.306 "adrfam": "ipv4", 00:27:10.306 "trsvcid": "4420", 00:27:10.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:10.306 "psk": "key0" 00:27:10.306 } 00:27:10.306 } 00:27:10.306 Got JSON-RPC error response 00:27:10.306 GoRPCClient: error on JSON-RPC call 00:27:10.306 17:30:40 -- common/autotest_common.sh@641 -- # es=1 00:27:10.306 17:30:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:10.306 17:30:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:10.306 17:30:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:10.306 17:30:40 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:10.306 17:30:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:10.564 17:30:40 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:10.564 17:30:40 -- keyring/common.sh@15 -- # local name key digest path 00:27:10.564 17:30:40 -- keyring/common.sh@17 -- # name=key0 00:27:10.564 17:30:40 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:10.564 17:30:40 -- keyring/common.sh@17 -- # digest=0 00:27:10.564 17:30:40 -- keyring/common.sh@18 -- # mktemp 00:27:10.564 17:30:40 -- keyring/common.sh@18 -- # path=/tmp/tmp.SNW1BsvmjP 00:27:10.564 17:30:40 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:10.564 17:30:40 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:10.564 17:30:40 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:10.564 17:30:40 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:27:10.564 17:30:40 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:27:10.564 17:30:40 -- nvmf/common.sh@693 -- # digest=0 00:27:10.564 17:30:40 -- nvmf/common.sh@694 -- # python - 00:27:10.564 
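Note on the step above: keyring/file.sh@95 regenerates a PSK file after the previous one was deleted — prep_key wraps the hex key into the NVMeTLSkey-1 interchange form and writes it to a fresh mktemp path. A minimal sketch of the flow being exercised here and in the chmod/add/attach steps that follow, assuming the bperf.sock RPC socket used throughout this test (the path and key material are illustrative only):

    key_path=$(mktemp)                                   # e.g. /tmp/tmp.SNW1BsvmjP in this run
    # write the interchange-format PSK produced by format_interchange_psk into "$key_path"
    chmod 0600 "$key_path"                               # 0660 was rejected earlier: "Invalid permissions for key file"
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0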
17:30:40 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SNW1BsvmjP 00:27:10.564 17:30:40 -- keyring/common.sh@23 -- # echo /tmp/tmp.SNW1BsvmjP 00:27:10.564 17:30:40 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.SNW1BsvmjP 00:27:10.564 17:30:40 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SNW1BsvmjP 00:27:10.565 17:30:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SNW1BsvmjP 00:27:10.825 17:30:40 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.825 17:30:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:11.084 nvme0n1 00:27:11.084 17:30:40 -- keyring/file.sh@99 -- # get_refcnt key0 00:27:11.084 17:30:40 -- keyring/common.sh@12 -- # get_key key0 00:27:11.084 17:30:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:11.084 17:30:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:11.084 17:30:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:11.084 17:30:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:11.342 17:30:41 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:11.342 17:30:41 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:11.342 17:30:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:11.600 17:30:41 -- keyring/file.sh@101 -- # get_key key0 00:27:11.600 17:30:41 -- keyring/file.sh@101 -- # jq -r .removed 00:27:11.600 17:30:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:11.600 17:30:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:11.600 17:30:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:11.859 17:30:41 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:11.859 17:30:41 -- keyring/file.sh@102 -- # get_refcnt key0 00:27:11.859 17:30:41 -- keyring/common.sh@12 -- # get_key key0 00:27:11.859 17:30:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:11.859 17:30:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:11.859 17:30:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:11.859 17:30:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:12.118 17:30:41 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:12.118 17:30:41 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:12.118 17:30:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:12.118 17:30:42 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:12.118 17:30:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.118 17:30:42 -- keyring/file.sh@104 -- # jq length 00:27:12.376 17:30:42 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:12.376 17:30:42 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SNW1BsvmjP 00:27:12.376 17:30:42 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SNW1BsvmjP 00:27:12.634 17:30:42 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ambGVJccDd 00:27:12.634 17:30:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ambGVJccDd 00:27:12.893 17:30:42 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:12.893 17:30:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.152 nvme0n1 00:27:13.152 17:30:43 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:13.152 17:30:43 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:13.411 17:30:43 -- keyring/file.sh@112 -- # config='{ 00:27:13.411 "subsystems": [ 00:27:13.411 { 00:27:13.411 "subsystem": "keyring", 00:27:13.411 "config": [ 00:27:13.411 { 00:27:13.411 "method": "keyring_file_add_key", 00:27:13.411 "params": { 00:27:13.411 "name": "key0", 00:27:13.411 "path": "/tmp/tmp.SNW1BsvmjP" 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "keyring_file_add_key", 00:27:13.411 "params": { 00:27:13.411 "name": "key1", 00:27:13.411 "path": "/tmp/tmp.ambGVJccDd" 00:27:13.411 } 00:27:13.411 } 00:27:13.411 ] 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "subsystem": "iobuf", 00:27:13.411 "config": [ 00:27:13.411 { 00:27:13.411 "method": "iobuf_set_options", 00:27:13.411 "params": { 00:27:13.411 "large_bufsize": 135168, 00:27:13.411 "large_pool_count": 1024, 00:27:13.411 "small_bufsize": 8192, 00:27:13.411 "small_pool_count": 8192 00:27:13.411 } 00:27:13.411 } 00:27:13.411 ] 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "subsystem": "sock", 00:27:13.411 "config": [ 00:27:13.411 { 00:27:13.411 "method": "sock_impl_set_options", 00:27:13.411 "params": { 00:27:13.411 "enable_ktls": false, 00:27:13.411 "enable_placement_id": 0, 00:27:13.411 "enable_quickack": false, 00:27:13.411 "enable_recv_pipe": true, 00:27:13.411 "enable_zerocopy_send_client": false, 00:27:13.411 "enable_zerocopy_send_server": true, 00:27:13.411 "impl_name": "posix", 00:27:13.411 "recv_buf_size": 2097152, 00:27:13.411 "send_buf_size": 2097152, 00:27:13.411 "tls_version": 0, 00:27:13.411 "zerocopy_threshold": 0 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "sock_impl_set_options", 00:27:13.411 "params": { 00:27:13.411 "enable_ktls": false, 00:27:13.411 "enable_placement_id": 0, 00:27:13.411 "enable_quickack": false, 00:27:13.411 "enable_recv_pipe": true, 00:27:13.411 "enable_zerocopy_send_client": false, 00:27:13.411 "enable_zerocopy_send_server": true, 00:27:13.411 "impl_name": "ssl", 00:27:13.411 "recv_buf_size": 4096, 00:27:13.411 "send_buf_size": 4096, 00:27:13.411 "tls_version": 0, 00:27:13.411 "zerocopy_threshold": 0 00:27:13.411 } 00:27:13.411 } 00:27:13.411 ] 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "subsystem": "vmd", 00:27:13.411 "config": [] 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "subsystem": "accel", 00:27:13.411 "config": [ 00:27:13.411 { 00:27:13.411 "method": "accel_set_options", 00:27:13.411 "params": { 00:27:13.411 "buf_count": 2048, 00:27:13.411 "large_cache_size": 16, 00:27:13.411 
"sequence_count": 2048, 00:27:13.411 "small_cache_size": 128, 00:27:13.411 "task_count": 2048 00:27:13.411 } 00:27:13.411 } 00:27:13.411 ] 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "subsystem": "bdev", 00:27:13.411 "config": [ 00:27:13.411 { 00:27:13.411 "method": "bdev_set_options", 00:27:13.411 "params": { 00:27:13.411 "bdev_auto_examine": true, 00:27:13.411 "bdev_io_cache_size": 256, 00:27:13.411 "bdev_io_pool_size": 65535, 00:27:13.411 "iobuf_large_cache_size": 16, 00:27:13.411 "iobuf_small_cache_size": 128 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "bdev_raid_set_options", 00:27:13.411 "params": { 00:27:13.411 "process_window_size_kb": 1024 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "bdev_iscsi_set_options", 00:27:13.411 "params": { 00:27:13.411 "timeout_sec": 30 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "bdev_nvme_set_options", 00:27:13.411 "params": { 00:27:13.411 "action_on_timeout": "none", 00:27:13.411 "allow_accel_sequence": false, 00:27:13.411 "arbitration_burst": 0, 00:27:13.411 "bdev_retry_count": 3, 00:27:13.411 "ctrlr_loss_timeout_sec": 0, 00:27:13.411 "delay_cmd_submit": true, 00:27:13.411 "dhchap_dhgroups": [ 00:27:13.411 "null", 00:27:13.411 "ffdhe2048", 00:27:13.411 "ffdhe3072", 00:27:13.411 "ffdhe4096", 00:27:13.411 "ffdhe6144", 00:27:13.411 "ffdhe8192" 00:27:13.411 ], 00:27:13.411 "dhchap_digests": [ 00:27:13.411 "sha256", 00:27:13.411 "sha384", 00:27:13.411 "sha512" 00:27:13.411 ], 00:27:13.411 "disable_auto_failback": false, 00:27:13.411 "fast_io_fail_timeout_sec": 0, 00:27:13.411 "generate_uuids": false, 00:27:13.411 "high_priority_weight": 0, 00:27:13.411 "io_path_stat": false, 00:27:13.411 "io_queue_requests": 512, 00:27:13.411 "keep_alive_timeout_ms": 10000, 00:27:13.411 "low_priority_weight": 0, 00:27:13.411 "medium_priority_weight": 0, 00:27:13.411 "nvme_adminq_poll_period_us": 10000, 00:27:13.411 "nvme_error_stat": false, 00:27:13.411 "nvme_ioq_poll_period_us": 0, 00:27:13.411 "rdma_cm_event_timeout_ms": 0, 00:27:13.411 "rdma_max_cq_size": 0, 00:27:13.411 "rdma_srq_size": 0, 00:27:13.411 "reconnect_delay_sec": 0, 00:27:13.411 "timeout_admin_us": 0, 00:27:13.411 "timeout_us": 0, 00:27:13.411 "transport_ack_timeout": 0, 00:27:13.411 "transport_retry_count": 4, 00:27:13.411 "transport_tos": 0 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "bdev_nvme_attach_controller", 00:27:13.411 "params": { 00:27:13.411 "adrfam": "IPv4", 00:27:13.411 "ctrlr_loss_timeout_sec": 0, 00:27:13.411 "ddgst": false, 00:27:13.411 "fast_io_fail_timeout_sec": 0, 00:27:13.411 "hdgst": false, 00:27:13.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:13.411 "name": "nvme0", 00:27:13.411 "prchk_guard": false, 00:27:13.411 "prchk_reftag": false, 00:27:13.411 "psk": "key0", 00:27:13.411 "reconnect_delay_sec": 0, 00:27:13.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.411 "traddr": "127.0.0.1", 00:27:13.411 "trsvcid": "4420", 00:27:13.411 "trtype": "TCP" 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "bdev_nvme_set_hotplug", 00:27:13.411 "params": { 00:27:13.411 "enable": false, 00:27:13.411 "period_us": 100000 00:27:13.411 } 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "method": "bdev_wait_for_examine" 00:27:13.411 } 00:27:13.411 ] 00:27:13.411 }, 00:27:13.411 { 00:27:13.411 "subsystem": "nbd", 00:27:13.411 "config": [] 00:27:13.411 } 00:27:13.411 ] 00:27:13.411 }' 00:27:13.411 17:30:43 -- keyring/file.sh@114 -- # killprocess 99197 00:27:13.411 17:30:43 -- 
common/autotest_common.sh@936 -- # '[' -z 99197 ']' 00:27:13.411 17:30:43 -- common/autotest_common.sh@940 -- # kill -0 99197 00:27:13.411 17:30:43 -- common/autotest_common.sh@941 -- # uname 00:27:13.411 17:30:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:13.411 17:30:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99197 00:27:13.411 killing process with pid 99197 00:27:13.411 Received shutdown signal, test time was about 1.000000 seconds 00:27:13.411 00:27:13.411 Latency(us) 00:27:13.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.411 =================================================================================================================== 00:27:13.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:13.411 17:30:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:13.411 17:30:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:13.411 17:30:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99197' 00:27:13.411 17:30:43 -- common/autotest_common.sh@955 -- # kill 99197 00:27:13.411 17:30:43 -- common/autotest_common.sh@960 -- # wait 99197 00:27:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:13.671 17:30:43 -- keyring/file.sh@117 -- # bperfpid=99653 00:27:13.671 17:30:43 -- keyring/file.sh@119 -- # waitforlisten 99653 /var/tmp/bperf.sock 00:27:13.671 17:30:43 -- common/autotest_common.sh@817 -- # '[' -z 99653 ']' 00:27:13.671 17:30:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:13.671 17:30:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:13.671 17:30:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
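The bperf instance killed above is immediately replaced: keyring/file.sh@115 starts a second bdevperf and feeds it the JSON captured by save_config through /dev/fd/63, which here is bash process substitution over the echoed config, while waitforlisten blocks until the UNIX-domain RPC socket answers. A sketch of that launch pattern, with the flags copied from the command line below and "$config" standing in for the saved JSON:

    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock        # autotest helper: poll until the socket is up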
00:27:13.671 17:30:43 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:13.671 17:30:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:13.671 17:30:43 -- keyring/file.sh@115 -- # echo '{ 00:27:13.671 "subsystems": [ 00:27:13.671 { 00:27:13.671 "subsystem": "keyring", 00:27:13.671 "config": [ 00:27:13.671 { 00:27:13.671 "method": "keyring_file_add_key", 00:27:13.671 "params": { 00:27:13.671 "name": "key0", 00:27:13.671 "path": "/tmp/tmp.SNW1BsvmjP" 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "keyring_file_add_key", 00:27:13.671 "params": { 00:27:13.671 "name": "key1", 00:27:13.671 "path": "/tmp/tmp.ambGVJccDd" 00:27:13.671 } 00:27:13.671 } 00:27:13.671 ] 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "subsystem": "iobuf", 00:27:13.671 "config": [ 00:27:13.671 { 00:27:13.671 "method": "iobuf_set_options", 00:27:13.671 "params": { 00:27:13.671 "large_bufsize": 135168, 00:27:13.671 "large_pool_count": 1024, 00:27:13.671 "small_bufsize": 8192, 00:27:13.671 "small_pool_count": 8192 00:27:13.671 } 00:27:13.671 } 00:27:13.671 ] 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "subsystem": "sock", 00:27:13.671 "config": [ 00:27:13.671 { 00:27:13.671 "method": "sock_impl_set_options", 00:27:13.671 "params": { 00:27:13.671 "enable_ktls": false, 00:27:13.671 "enable_placement_id": 0, 00:27:13.671 "enable_quickack": false, 00:27:13.671 "enable_recv_pipe": true, 00:27:13.671 "enable_zerocopy_send_client": false, 00:27:13.671 "enable_zerocopy_send_server": true, 00:27:13.671 "impl_name": "posix", 00:27:13.671 "recv_buf_size": 2097152, 00:27:13.671 "send_buf_size": 2097152, 00:27:13.671 "tls_version": 0, 00:27:13.671 "zerocopy_threshold": 0 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "sock_impl_set_options", 00:27:13.671 "params": { 00:27:13.671 "enable_ktls": false, 00:27:13.671 "enable_placement_id": 0, 00:27:13.671 "enable_quickack": false, 00:27:13.671 "enable_recv_pipe": true, 00:27:13.671 "enable_zerocopy_send_client": false, 00:27:13.671 "enable_zerocopy_send_server": true, 00:27:13.671 "impl_name": "ssl", 00:27:13.671 "recv_buf_size": 4096, 00:27:13.671 "send_buf_size": 4096, 00:27:13.671 "tls_version": 0, 00:27:13.671 "zerocopy_threshold": 0 00:27:13.671 } 00:27:13.671 } 00:27:13.671 ] 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "subsystem": "vmd", 00:27:13.671 "config": [] 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "subsystem": "accel", 00:27:13.671 "config": [ 00:27:13.671 { 00:27:13.671 "method": "accel_set_options", 00:27:13.671 "params": { 00:27:13.671 "buf_count": 2048, 00:27:13.671 "large_cache_size": 16, 00:27:13.671 "sequence_count": 2048, 00:27:13.671 "small_cache_size": 128, 00:27:13.671 "task_count": 2048 00:27:13.671 } 00:27:13.671 } 00:27:13.671 ] 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "subsystem": "bdev", 00:27:13.671 "config": [ 00:27:13.671 { 00:27:13.671 "method": "bdev_set_options", 00:27:13.671 "params": { 00:27:13.671 "bdev_auto_examine": true, 00:27:13.671 "bdev_io_cache_size": 256, 00:27:13.671 "bdev_io_pool_size": 65535, 00:27:13.671 "iobuf_large_cache_size": 16, 00:27:13.671 "iobuf_small_cache_size": 128 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "bdev_raid_set_options", 00:27:13.671 "params": { 00:27:13.671 "process_window_size_kb": 1024 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "bdev_iscsi_set_options", 00:27:13.671 "params": { 00:27:13.671 
"timeout_sec": 30 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "bdev_nvme_set_options", 00:27:13.671 "params": { 00:27:13.671 "action_on_timeout": "none", 00:27:13.671 "allow_accel_sequence": false, 00:27:13.671 "arbitration_burst": 0, 00:27:13.671 "bdev_retry_count": 3, 00:27:13.671 "ctrlr_loss_timeout_sec": 0, 00:27:13.671 "delay_cmd_submit": true, 00:27:13.671 "dhchap_dhgroups": [ 00:27:13.671 "null", 00:27:13.671 "ffdhe2048", 00:27:13.671 "ffdhe3072", 00:27:13.671 "ffdhe4096", 00:27:13.671 "ffdhe6144", 00:27:13.671 "ffdhe8192" 00:27:13.671 ], 00:27:13.671 "dhchap_digests": [ 00:27:13.671 "sha256", 00:27:13.671 "sha384", 00:27:13.671 "sha512" 00:27:13.671 ], 00:27:13.671 "disable_auto_failback": false, 00:27:13.671 "fast_io_fail_timeout_sec": 0, 00:27:13.671 "generate_uuids": false, 00:27:13.671 "high_priority_weight": 0, 00:27:13.671 "io_path_stat": false, 00:27:13.671 "io_queue_requests": 512, 00:27:13.671 "keep_alive_timeout_ms": 10000, 00:27:13.671 "low_priority_weight": 0, 00:27:13.671 "medium_priority_weight": 0, 00:27:13.671 "nvme_adminq_poll_period_us": 10000, 00:27:13.671 "nvme_error_stat": false, 00:27:13.671 "nvme_ioq_poll_period_us": 0, 00:27:13.671 "rdma_cm_event_timeout_ms": 0, 00:27:13.671 "rdma_max_cq_size": 0, 00:27:13.671 "rdma_srq_size": 0, 00:27:13.671 "reconnect_delay_sec": 0, 00:27:13.671 "timeout_admin_us": 0, 00:27:13.671 "timeout_us": 0, 00:27:13.671 "transport_ack_timeout": 0, 00:27:13.671 "transport_retry_count": 4, 00:27:13.671 "transport_tos": 0 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "bdev_nvme_attach_controller", 00:27:13.671 "params": { 00:27:13.671 "adrfam": "IPv4", 00:27:13.671 "ctrlr_loss_timeout_sec": 0, 00:27:13.671 "ddgst": false, 00:27:13.671 "fast_io_fail_timeout_sec": 0, 00:27:13.671 "hdgst": false, 00:27:13.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:13.671 "name": "nvme0", 00:27:13.671 "prchk_guard": false, 00:27:13.671 "prchk_reftag": false, 00:27:13.671 "psk": "key0", 00:27:13.671 "reconnect_delay_sec": 0, 00:27:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:13.671 "traddr": "127.0.0.1", 00:27:13.671 "trsvcid": "4420", 00:27:13.671 "trtype": "TCP" 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "bdev_nvme_set_hotplug", 00:27:13.671 "params": { 00:27:13.671 "enable": false, 00:27:13.671 "period_us": 100000 00:27:13.671 } 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "method": "bdev_wait_for_examine" 00:27:13.671 } 00:27:13.671 ] 00:27:13.671 }, 00:27:13.671 { 00:27:13.671 "subsystem": "nbd", 00:27:13.671 "config": [] 00:27:13.671 } 00:27:13.671 ] 00:27:13.671 }' 00:27:13.671 17:30:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.671 [2024-04-25 17:30:43.586786] Starting SPDK v24.05-pre git sha1 06472fb6d / DPDK 23.11.0 initialization... 
00:27:13.671 [2024-04-25 17:30:43.586869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99653 ] 00:27:13.930 [2024-04-25 17:30:43.718089] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.930 [2024-04-25 17:30:43.769696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.930 [2024-04-25 17:30:43.899725] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:14.865 17:30:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:14.865 17:30:44 -- common/autotest_common.sh@850 -- # return 0 00:27:14.865 17:30:44 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:14.865 17:30:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:14.865 17:30:44 -- keyring/file.sh@120 -- # jq length 00:27:14.866 17:30:44 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:14.866 17:30:44 -- keyring/file.sh@121 -- # get_refcnt key0 00:27:14.866 17:30:44 -- keyring/common.sh@12 -- # get_key key0 00:27:14.866 17:30:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:14.866 17:30:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:14.866 17:30:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:14.866 17:30:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.125 17:30:44 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:15.125 17:30:44 -- keyring/file.sh@122 -- # get_refcnt key1 00:27:15.125 17:30:44 -- keyring/common.sh@12 -- # get_key key1 00:27:15.125 17:30:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.125 17:30:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.125 17:30:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.125 17:30:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:15.384 17:30:45 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:15.384 17:30:45 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:15.384 17:30:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:15.384 17:30:45 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:15.644 17:30:45 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:15.644 17:30:45 -- keyring/file.sh@1 -- # cleanup 00:27:15.644 17:30:45 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.SNW1BsvmjP /tmp/tmp.ambGVJccDd 00:27:15.644 17:30:45 -- keyring/file.sh@20 -- # killprocess 99653 00:27:15.644 17:30:45 -- common/autotest_common.sh@936 -- # '[' -z 99653 ']' 00:27:15.644 17:30:45 -- common/autotest_common.sh@940 -- # kill -0 99653 00:27:15.644 17:30:45 -- common/autotest_common.sh@941 -- # uname 00:27:15.644 17:30:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.644 17:30:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99653 00:27:15.644 17:30:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:15.644 17:30:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:15.644 killing process with pid 99653 00:27:15.644 17:30:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99653' 00:27:15.644 17:30:45 -- 
common/autotest_common.sh@955 -- # kill 99653 00:27:15.644 Received shutdown signal, test time was about 1.000000 seconds 00:27:15.644 00:27:15.644 Latency(us) 00:27:15.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.644 =================================================================================================================== 00:27:15.644 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:15.644 17:30:45 -- common/autotest_common.sh@960 -- # wait 99653 00:27:15.903 17:30:45 -- keyring/file.sh@21 -- # killprocess 99162 00:27:15.904 17:30:45 -- common/autotest_common.sh@936 -- # '[' -z 99162 ']' 00:27:15.904 17:30:45 -- common/autotest_common.sh@940 -- # kill -0 99162 00:27:15.904 17:30:45 -- common/autotest_common.sh@941 -- # uname 00:27:15.904 17:30:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.904 17:30:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99162 00:27:15.904 17:30:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:15.904 17:30:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:15.904 killing process with pid 99162 00:27:15.904 17:30:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99162' 00:27:15.904 17:30:45 -- common/autotest_common.sh@955 -- # kill 99162 00:27:15.904 [2024-04-25 17:30:45.681492] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:15.904 17:30:45 -- common/autotest_common.sh@960 -- # wait 99162 00:27:16.163 ************************************ 00:27:16.163 END TEST keyring_file 00:27:16.163 00:27:16.163 real 0m14.355s 00:27:16.163 user 0m35.830s 00:27:16.163 sys 0m2.705s 00:27:16.163 17:30:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:16.163 17:30:45 -- common/autotest_common.sh@10 -- # set +x 00:27:16.163 ************************************ 00:27:16.163 17:30:45 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:27:16.163 17:30:45 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:27:16.163 17:30:45 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:16.163 17:30:45 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:16.163 17:30:45 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:27:16.163 17:30:45 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:27:16.163 17:30:45 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:27:16.163 17:30:45 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:27:16.163 17:30:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:16.163 17:30:45 -- common/autotest_common.sh@10 -- # set +x 00:27:16.163 17:30:45 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:27:16.163 17:30:45 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:27:16.163 17:30:45 -- common/autotest_common.sh@1379 -- # xtrace_disable 
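Both the bperf process (99653) and the nvmf target (99162) are torn down through the same killprocess helper seen above. A rough shape of that helper, simplified from the fragments visible in this log rather than the verbatim implementation in common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                              # bail out if it already exited
        if [ "$(uname)" = Linux ]; then
            local name; name=$(ps --no-headers -o comm= "$pid") # reactor_0 / reactor_1 in this run
            # the real helper branches here when the process turns out to be a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }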
00:27:16.163 17:30:45 -- common/autotest_common.sh@10 -- # set +x 00:27:18.069 INFO: APP EXITING 00:27:18.069 INFO: killing all VMs 00:27:18.069 INFO: killing vhost app 00:27:18.069 INFO: EXIT DONE 00:27:18.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:18.587 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:18.587 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:19.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:19.154 Cleaning 00:27:19.154 Removing: /var/run/dpdk/spdk0/config 00:27:19.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:19.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:19.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:19.154 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:19.154 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:19.154 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:19.154 Removing: /var/run/dpdk/spdk1/config 00:27:19.154 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:19.154 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:19.154 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:19.154 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:19.154 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:19.154 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:19.154 Removing: /var/run/dpdk/spdk2/config 00:27:19.154 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:19.154 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:19.154 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:19.154 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:19.154 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:19.154 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:19.414 Removing: /var/run/dpdk/spdk3/config 00:27:19.414 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:19.414 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:19.414 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:19.414 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:19.414 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:19.414 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:19.414 Removing: /var/run/dpdk/spdk4/config 00:27:19.414 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:19.414 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:19.414 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:19.414 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:19.414 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:19.414 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:19.414 Removing: /dev/shm/nvmf_trace.0 00:27:19.414 Removing: /dev/shm/spdk_tgt_trace.pid60472 00:27:19.414 Removing: /var/run/dpdk/spdk0 00:27:19.414 Removing: /var/run/dpdk/spdk1 00:27:19.414 Removing: /var/run/dpdk/spdk2 00:27:19.414 Removing: /var/run/dpdk/spdk3 00:27:19.414 Removing: /var/run/dpdk/spdk4 00:27:19.414 Removing: /var/run/dpdk/spdk_pid60308 00:27:19.414 Removing: /var/run/dpdk/spdk_pid60472 00:27:19.414 Removing: /var/run/dpdk/spdk_pid60763 00:27:19.414 Removing: /var/run/dpdk/spdk_pid60860 00:27:19.414 Removing: /var/run/dpdk/spdk_pid60881 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61005 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61016 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61144 00:27:19.414 Removing: 
/var/run/dpdk/spdk_pid61424 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61594 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61681 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61777 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61864 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61902 00:27:19.414 Removing: /var/run/dpdk/spdk_pid61936 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62009 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62129 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62749 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62798 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62858 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62886 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62959 00:27:19.414 Removing: /var/run/dpdk/spdk_pid62979 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63051 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63079 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63141 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63152 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63202 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63224 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63375 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63420 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63494 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63558 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63591 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63660 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63704 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63737 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63782 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63815 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63854 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63892 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63931 00:27:19.414 Removing: /var/run/dpdk/spdk_pid63969 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64009 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64048 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64087 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64120 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64158 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64198 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64236 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64275 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64312 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64358 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64391 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64440 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64509 00:27:19.414 Removing: /var/run/dpdk/spdk_pid64617 00:27:19.414 Removing: /var/run/dpdk/spdk_pid65044 00:27:19.414 Removing: /var/run/dpdk/spdk_pid71799 00:27:19.414 Removing: /var/run/dpdk/spdk_pid72128 00:27:19.414 Removing: /var/run/dpdk/spdk_pid73310 00:27:19.414 Removing: /var/run/dpdk/spdk_pid73691 00:27:19.414 Removing: /var/run/dpdk/spdk_pid73945 00:27:19.414 Removing: /var/run/dpdk/spdk_pid73995 00:27:19.414 Removing: /var/run/dpdk/spdk_pid74833 00:27:19.414 Removing: /var/run/dpdk/spdk_pid74841 00:27:19.414 Removing: /var/run/dpdk/spdk_pid74899 00:27:19.414 Removing: /var/run/dpdk/spdk_pid74953 00:27:19.414 Removing: /var/run/dpdk/spdk_pid75013 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75057 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75059 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75090 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75127 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75129 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75187 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75244 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75306 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75345 
00:27:19.674 Removing: /var/run/dpdk/spdk_pid75347 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75378 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75642 00:27:19.674 Removing: /var/run/dpdk/spdk_pid75792 00:27:19.674 Removing: /var/run/dpdk/spdk_pid76059 00:27:19.674 Removing: /var/run/dpdk/spdk_pid76109 00:27:19.674 Removing: /var/run/dpdk/spdk_pid76482 00:27:19.674 Removing: /var/run/dpdk/spdk_pid77024 00:27:19.674 Removing: /var/run/dpdk/spdk_pid77436 00:27:19.674 Removing: /var/run/dpdk/spdk_pid78353 00:27:19.674 Removing: /var/run/dpdk/spdk_pid79283 00:27:19.674 Removing: /var/run/dpdk/spdk_pid79400 00:27:19.674 Removing: /var/run/dpdk/spdk_pid79462 00:27:19.674 Removing: /var/run/dpdk/spdk_pid80933 00:27:19.674 Removing: /var/run/dpdk/spdk_pid81164 00:27:19.674 Removing: /var/run/dpdk/spdk_pid81596 00:27:19.674 Removing: /var/run/dpdk/spdk_pid81707 00:27:19.674 Removing: /var/run/dpdk/spdk_pid81860 00:27:19.674 Removing: /var/run/dpdk/spdk_pid81900 00:27:19.674 Removing: /var/run/dpdk/spdk_pid81940 00:27:19.674 Removing: /var/run/dpdk/spdk_pid81990 00:27:19.674 Removing: /var/run/dpdk/spdk_pid82144 00:27:19.674 Removing: /var/run/dpdk/spdk_pid82291 00:27:19.674 Removing: /var/run/dpdk/spdk_pid82544 00:27:19.674 Removing: /var/run/dpdk/spdk_pid82667 00:27:19.674 Removing: /var/run/dpdk/spdk_pid82909 00:27:19.674 Removing: /var/run/dpdk/spdk_pid83021 00:27:19.674 Removing: /var/run/dpdk/spdk_pid83139 00:27:19.674 Removing: /var/run/dpdk/spdk_pid83475 00:27:19.674 Removing: /var/run/dpdk/spdk_pid83856 00:27:19.674 Removing: /var/run/dpdk/spdk_pid83858 00:27:19.674 Removing: /var/run/dpdk/spdk_pid86091 00:27:19.674 Removing: /var/run/dpdk/spdk_pid86404 00:27:19.674 Removing: /var/run/dpdk/spdk_pid86892 00:27:19.674 Removing: /var/run/dpdk/spdk_pid86900 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87235 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87253 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87273 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87299 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87304 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87443 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87451 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87559 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87561 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87668 00:27:19.674 Removing: /var/run/dpdk/spdk_pid87671 00:27:19.674 Removing: /var/run/dpdk/spdk_pid88094 00:27:19.674 Removing: /var/run/dpdk/spdk_pid88138 00:27:19.674 Removing: /var/run/dpdk/spdk_pid88218 00:27:19.674 Removing: /var/run/dpdk/spdk_pid88271 00:27:19.674 Removing: /var/run/dpdk/spdk_pid88613 00:27:19.674 Removing: /var/run/dpdk/spdk_pid88848 00:27:19.674 Removing: /var/run/dpdk/spdk_pid89309 00:27:19.674 Removing: /var/run/dpdk/spdk_pid89812 00:27:19.674 Removing: /var/run/dpdk/spdk_pid90382 00:27:19.674 Removing: /var/run/dpdk/spdk_pid90390 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92323 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92409 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92479 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92552 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92713 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92798 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92869 00:27:19.674 Removing: /var/run/dpdk/spdk_pid92960 00:27:19.674 Removing: /var/run/dpdk/spdk_pid93308 00:27:19.674 Removing: /var/run/dpdk/spdk_pid93971 00:27:19.674 Removing: /var/run/dpdk/spdk_pid95309 00:27:19.674 Removing: /var/run/dpdk/spdk_pid95509 00:27:19.674 Removing: /var/run/dpdk/spdk_pid95789 00:27:19.674 Removing: 
/var/run/dpdk/spdk_pid96082 00:27:19.674 Removing: /var/run/dpdk/spdk_pid96615 00:27:19.674 Removing: /var/run/dpdk/spdk_pid96620 00:27:19.674 Removing: /var/run/dpdk/spdk_pid96993 00:27:19.674 Removing: /var/run/dpdk/spdk_pid97156 00:27:19.938 Removing: /var/run/dpdk/spdk_pid97317 00:27:19.938 Removing: /var/run/dpdk/spdk_pid97404 00:27:19.938 Removing: /var/run/dpdk/spdk_pid97559 00:27:19.938 Removing: /var/run/dpdk/spdk_pid97672 00:27:19.938 Removing: /var/run/dpdk/spdk_pid98349 00:27:19.938 Removing: /var/run/dpdk/spdk_pid98379 00:27:19.938 Removing: /var/run/dpdk/spdk_pid98420 00:27:19.938 Removing: /var/run/dpdk/spdk_pid98668 00:27:19.938 Removing: /var/run/dpdk/spdk_pid98710 00:27:19.938 Removing: /var/run/dpdk/spdk_pid98741 00:27:19.938 Removing: /var/run/dpdk/spdk_pid99162 00:27:19.938 Removing: /var/run/dpdk/spdk_pid99197 00:27:19.938 Removing: /var/run/dpdk/spdk_pid99653 00:27:19.938 Clean 00:27:19.938 17:30:49 -- common/autotest_common.sh@1437 -- # return 0 00:27:19.938 17:30:49 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:27:19.938 17:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:19.938 17:30:49 -- common/autotest_common.sh@10 -- # set +x 00:27:19.938 17:30:49 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:27:19.938 17:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:19.938 17:30:49 -- common/autotest_common.sh@10 -- # set +x 00:27:19.938 17:30:49 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:19.938 17:30:49 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:19.938 17:30:49 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:20.220 17:30:49 -- spdk/autotest.sh@389 -- # hash lcov 00:27:20.220 17:30:49 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:20.220 17:30:49 -- spdk/autotest.sh@391 -- # hostname 00:27:20.220 17:30:49 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:20.220 geninfo: WARNING: invalid characters removed from testname! 
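The lcov capture above is followed by a merge-and-filter pass (autotest.sh@392-397): the baseline and test captures are combined into cov_total.info, and DPDK, system and example-app sources are stripped out. A condensed sketch of that pipeline, dropping the genhtml rc options for brevity:

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
    lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov $LCOV_OPTS -r cov_total.info '/usr/*'   -o cov_total.info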
00:27:42.166 17:31:10 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:44.701 17:31:14 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:46.606 17:31:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:49.142 17:31:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:51.677 17:31:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:53.580 17:31:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:56.113 17:31:25 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:56.114 17:31:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:56.114 17:31:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:56.114 17:31:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.114 17:31:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.114 17:31:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.114 17:31:25 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.114 17:31:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.114 17:31:25 -- paths/export.sh@5 -- $ export PATH 00:27:56.114 17:31:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.114 17:31:25 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:56.114 17:31:25 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:56.114 17:31:25 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714066285.XXXXXX 00:27:56.114 17:31:25 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714066285.tuOGo6 00:27:56.114 17:31:25 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:56.114 17:31:25 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:27:56.114 17:31:25 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:56.114 17:31:25 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:56.114 17:31:25 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:56.114 17:31:25 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:56.114 17:31:25 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:27:56.114 17:31:25 -- common/autotest_common.sh@10 -- $ set +x 00:27:56.114 17:31:25 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:27:56.114 17:31:25 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:27:56.114 17:31:25 -- pm/common@17 -- $ local monitor 00:27:56.114 17:31:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:56.114 17:31:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=101337 00:27:56.114 17:31:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:56.114 17:31:25 -- pm/common@21 -- $ date +%s 00:27:56.114 17:31:25 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=101339 00:27:56.114 17:31:25 -- pm/common@26 -- $ sleep 1 00:27:56.114 17:31:25 -- pm/common@21 -- $ date +%s 00:27:56.114 17:31:25 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714066285 00:27:56.114 17:31:25 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714066285 00:27:56.114 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714066285_collect-vmstat.pm.log 00:27:56.114 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714066285_collect-cpu-load.pm.log 00:27:57.051 17:31:26 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:27:57.051 17:31:26 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:57.051 17:31:26 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:57.051 17:31:26 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:57.051 17:31:26 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:57.051 17:31:26 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:57.051 17:31:26 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:57.051 17:31:26 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:57.051 17:31:26 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:57.051 17:31:26 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:57.051 17:31:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:57.051 17:31:27 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:57.051 17:31:27 -- pm/common@30 -- $ signal_monitor_resources TERM 00:27:57.051 17:31:27 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:27:57.051 17:31:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:57.051 17:31:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:27:57.051 17:31:27 -- pm/common@45 -- $ pid=101346 00:27:57.051 17:31:27 -- pm/common@52 -- $ sudo kill -TERM 101346 00:27:57.310 17:31:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:57.310 17:31:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:27:57.310 17:31:27 -- pm/common@45 -- $ pid=101345 00:27:57.310 17:31:27 -- pm/common@52 -- $ sudo kill -TERM 101345 00:27:57.310 + [[ -n 5155 ]] 00:27:57.310 + sudo kill 5155 00:27:57.319 [Pipeline] } 00:27:57.339 [Pipeline] // timeout 00:27:57.345 [Pipeline] } 00:27:57.363 [Pipeline] // stage 00:27:57.370 [Pipeline] } 00:27:57.389 [Pipeline] // catchError 00:27:57.399 [Pipeline] stage 00:27:57.401 [Pipeline] { (Stop VM) 00:27:57.418 [Pipeline] sh 00:27:57.700 + vagrant halt 00:28:01.025 ==> default: Halting domain... 00:28:07.599 [Pipeline] sh 00:28:07.878 + vagrant destroy -f 00:28:10.415 ==> default: Removing domain... 
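The Stop VM stage above is the standard vagrant teardown, run from the workspace that holds the Vagrantfile:

    vagrant halt           # "==> default: Halting domain..."
    vagrant destroy -f     # "==> default: Removing domain...", no confirmation prompt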
00:28:10.687 [Pipeline] sh 00:28:10.969 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:28:10.978 [Pipeline] } 00:28:10.996 [Pipeline] // stage 00:28:11.002 [Pipeline] } 00:28:11.020 [Pipeline] // dir 00:28:11.026 [Pipeline] } 00:28:11.044 [Pipeline] // wrap 00:28:11.051 [Pipeline] } 00:28:11.067 [Pipeline] // catchError 00:28:11.078 [Pipeline] stage 00:28:11.080 [Pipeline] { (Epilogue) 00:28:11.094 [Pipeline] sh 00:28:11.376 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:16.660 [Pipeline] catchError 00:28:16.662 [Pipeline] { 00:28:16.676 [Pipeline] sh 00:28:16.957 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:16.957 Artifacts sizes are good 00:28:16.968 [Pipeline] } 00:28:16.989 [Pipeline] // catchError 00:28:17.003 [Pipeline] archiveArtifacts 00:28:17.012 Archiving artifacts 00:28:17.185 [Pipeline] cleanWs 00:28:17.197 [WS-CLEANUP] Deleting project workspace... 00:28:17.197 [WS-CLEANUP] Deferred wipeout is used... 00:28:17.204 [WS-CLEANUP] done 00:28:17.206 [Pipeline] } 00:28:17.226 [Pipeline] // stage 00:28:17.232 [Pipeline] } 00:28:17.250 [Pipeline] // node 00:28:17.256 [Pipeline] End of Pipeline 00:28:17.293 Finished: SUCCESS